Nov 26 13:23:35 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 26 13:23:35 crc restorecon[4669]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 26 13:23:35 crc restorecon[4669]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc 
restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc 
restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 
13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc 
restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc 
restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35
crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 
13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc 
restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc 
restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 26 13:23:35 crc restorecon[4669]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 
26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:35 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 
crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc 
restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 26 13:23:36 crc restorecon[4669]:
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc 
restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Nov 26 13:23:36 crc restorecon[4669]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Nov 26 13:23:36 crc kubenswrapper[4695]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 13:23:36 crc kubenswrapper[4695]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Nov 26 13:23:36 crc kubenswrapper[4695]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 13:23:36 crc kubenswrapper[4695]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 26 13:23:36 crc kubenswrapper[4695]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 26 13:23:36 crc kubenswrapper[4695]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.907100 4695 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913877 4695 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913906 4695 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913914 4695 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913921 4695 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913928 4695 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913936 4695 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913942 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913948 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913953 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913958 4695 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913964 4695 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913970 4695 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913975 4695 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913981 4695 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913987 4695 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913992 4695 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.913998 4695 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914003 4695 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914009 4695 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 
13:23:36.914015 4695 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914020 4695 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914025 4695 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914031 4695 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914036 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914041 4695 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914049 4695 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914054 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914060 4695 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914065 4695 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914074 4695 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914081 4695 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914088 4695 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914095 4695 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914102 4695 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914108 4695 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914113 4695 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914118 4695 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914124 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914129 4695 feature_gate.go:330] unrecognized feature gate: Example Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914134 4695 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914140 4695 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914145 4695 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914150 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914158 4695 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914163 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914168 4695 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914174 4695 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914179 4695 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914185 4695 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914190 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914195 4695 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914201 4695 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914206 4695 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914212 4695 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914219 4695 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914225 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914231 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914238 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914248 4695 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914254 4695 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914260 4695 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914266 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:23:36 crc 
kubenswrapper[4695]: W1126 13:23:36.914272 4695 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914278 4695 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914284 4695 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914291 4695 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914296 4695 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914302 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914308 4695 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914315 4695 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.914322 4695 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914474 4695 flags.go:64] FLAG: --address="0.0.0.0" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914490 4695 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914501 4695 flags.go:64] FLAG: --anonymous-auth="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914509 4695 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914518 4695 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914525 4695 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 
13:23:36.914533 4695 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914542 4695 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914549 4695 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914555 4695 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914562 4695 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914569 4695 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914575 4695 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914581 4695 flags.go:64] FLAG: --cgroup-root="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914588 4695 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914594 4695 flags.go:64] FLAG: --client-ca-file="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914600 4695 flags.go:64] FLAG: --cloud-config="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914605 4695 flags.go:64] FLAG: --cloud-provider="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914611 4695 flags.go:64] FLAG: --cluster-dns="[]" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914648 4695 flags.go:64] FLAG: --cluster-domain="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914655 4695 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914661 4695 flags.go:64] FLAG: --config-dir="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914667 4695 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 26 13:23:36 
crc kubenswrapper[4695]: I1126 13:23:36.914673 4695 flags.go:64] FLAG: --container-log-max-files="5" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914682 4695 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914688 4695 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914694 4695 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914701 4695 flags.go:64] FLAG: --containerd-namespace="k8s.io" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914707 4695 flags.go:64] FLAG: --contention-profiling="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914713 4695 flags.go:64] FLAG: --cpu-cfs-quota="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914719 4695 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914726 4695 flags.go:64] FLAG: --cpu-manager-policy="none" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914731 4695 flags.go:64] FLAG: --cpu-manager-policy-options="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914739 4695 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914745 4695 flags.go:64] FLAG: --enable-controller-attach-detach="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914752 4695 flags.go:64] FLAG: --enable-debugging-handlers="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914758 4695 flags.go:64] FLAG: --enable-load-reader="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914767 4695 flags.go:64] FLAG: --enable-server="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914774 4695 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914784 4695 flags.go:64] 
FLAG: --event-burst="100" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914791 4695 flags.go:64] FLAG: --event-qps="50" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914798 4695 flags.go:64] FLAG: --event-storage-age-limit="default=0" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914804 4695 flags.go:64] FLAG: --event-storage-event-limit="default=0" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914811 4695 flags.go:64] FLAG: --eviction-hard="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914818 4695 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914825 4695 flags.go:64] FLAG: --eviction-minimum-reclaim="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914832 4695 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914838 4695 flags.go:64] FLAG: --eviction-soft="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914845 4695 flags.go:64] FLAG: --eviction-soft-grace-period="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914851 4695 flags.go:64] FLAG: --exit-on-lock-contention="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914858 4695 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914864 4695 flags.go:64] FLAG: --experimental-mounter-path="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914870 4695 flags.go:64] FLAG: --fail-cgroupv1="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914876 4695 flags.go:64] FLAG: --fail-swap-on="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914883 4695 flags.go:64] FLAG: --feature-gates="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914890 4695 flags.go:64] FLAG: --file-check-frequency="20s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914897 4695 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914903 4695 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914910 4695 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914916 4695 flags.go:64] FLAG: --healthz-port="10248" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914922 4695 flags.go:64] FLAG: --help="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914929 4695 flags.go:64] FLAG: --hostname-override="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914935 4695 flags.go:64] FLAG: --housekeeping-interval="10s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914942 4695 flags.go:64] FLAG: --http-check-frequency="20s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914948 4695 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914954 4695 flags.go:64] FLAG: --image-credential-provider-config="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914961 4695 flags.go:64] FLAG: --image-gc-high-threshold="85" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914967 4695 flags.go:64] FLAG: --image-gc-low-threshold="80" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914975 4695 flags.go:64] FLAG: --image-service-endpoint="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914981 4695 flags.go:64] FLAG: --kernel-memcg-notification="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914987 4695 flags.go:64] FLAG: --kube-api-burst="100" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.914995 4695 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915001 4695 flags.go:64] FLAG: --kube-api-qps="50" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915018 4695 
flags.go:64] FLAG: --kube-reserved="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915024 4695 flags.go:64] FLAG: --kube-reserved-cgroup="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915030 4695 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915036 4695 flags.go:64] FLAG: --kubelet-cgroups="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915043 4695 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915049 4695 flags.go:64] FLAG: --lock-file="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915055 4695 flags.go:64] FLAG: --log-cadvisor-usage="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915061 4695 flags.go:64] FLAG: --log-flush-frequency="5s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915068 4695 flags.go:64] FLAG: --log-json-info-buffer-size="0" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915078 4695 flags.go:64] FLAG: --log-json-split-stream="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915085 4695 flags.go:64] FLAG: --log-text-info-buffer-size="0" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915092 4695 flags.go:64] FLAG: --log-text-split-stream="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915098 4695 flags.go:64] FLAG: --logging-format="text" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915104 4695 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915111 4695 flags.go:64] FLAG: --make-iptables-util-chains="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915116 4695 flags.go:64] FLAG: --manifest-url="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915122 4695 flags.go:64] FLAG: --manifest-url-header="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915130 4695 
flags.go:64] FLAG: --max-housekeeping-interval="15s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915136 4695 flags.go:64] FLAG: --max-open-files="1000000" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915144 4695 flags.go:64] FLAG: --max-pods="110" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915150 4695 flags.go:64] FLAG: --maximum-dead-containers="-1" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915157 4695 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915163 4695 flags.go:64] FLAG: --memory-manager-policy="None" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915168 4695 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915175 4695 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915181 4695 flags.go:64] FLAG: --node-ip="192.168.126.11" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915187 4695 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915201 4695 flags.go:64] FLAG: --node-status-max-images="50" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915208 4695 flags.go:64] FLAG: --node-status-update-frequency="10s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915214 4695 flags.go:64] FLAG: --oom-score-adj="-999" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915221 4695 flags.go:64] FLAG: --pod-cidr="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915227 4695 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915243 4695 flags.go:64] FLAG: 
--pod-manifest-path="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915249 4695 flags.go:64] FLAG: --pod-max-pids="-1" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915256 4695 flags.go:64] FLAG: --pods-per-core="0" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915263 4695 flags.go:64] FLAG: --port="10250" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915270 4695 flags.go:64] FLAG: --protect-kernel-defaults="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915276 4695 flags.go:64] FLAG: --provider-id="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915281 4695 flags.go:64] FLAG: --qos-reserved="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915287 4695 flags.go:64] FLAG: --read-only-port="10255" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915294 4695 flags.go:64] FLAG: --register-node="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915300 4695 flags.go:64] FLAG: --register-schedulable="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915308 4695 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915320 4695 flags.go:64] FLAG: --registry-burst="10" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915328 4695 flags.go:64] FLAG: --registry-qps="5" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915336 4695 flags.go:64] FLAG: --reserved-cpus="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915364 4695 flags.go:64] FLAG: --reserved-memory="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915376 4695 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915383 4695 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915389 4695 flags.go:64] FLAG: --rotate-certificates="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 
13:23:36.915396 4695 flags.go:64] FLAG: --rotate-server-certificates="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915402 4695 flags.go:64] FLAG: --runonce="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915408 4695 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915415 4695 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915421 4695 flags.go:64] FLAG: --seccomp-default="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915428 4695 flags.go:64] FLAG: --serialize-image-pulls="true" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915434 4695 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915441 4695 flags.go:64] FLAG: --storage-driver-db="cadvisor" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915447 4695 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915454 4695 flags.go:64] FLAG: --storage-driver-password="root" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915460 4695 flags.go:64] FLAG: --storage-driver-secure="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915467 4695 flags.go:64] FLAG: --storage-driver-table="stats" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915481 4695 flags.go:64] FLAG: --storage-driver-user="root" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915488 4695 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915494 4695 flags.go:64] FLAG: --sync-frequency="1m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915501 4695 flags.go:64] FLAG: --system-cgroups="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915507 4695 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915517 4695 flags.go:64] FLAG: --system-reserved-cgroup="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915523 4695 flags.go:64] FLAG: --tls-cert-file="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915529 4695 flags.go:64] FLAG: --tls-cipher-suites="[]" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915540 4695 flags.go:64] FLAG: --tls-min-version="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915547 4695 flags.go:64] FLAG: --tls-private-key-file="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915554 4695 flags.go:64] FLAG: --topology-manager-policy="none" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915560 4695 flags.go:64] FLAG: --topology-manager-policy-options="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915566 4695 flags.go:64] FLAG: --topology-manager-scope="container" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915572 4695 flags.go:64] FLAG: --v="2" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915581 4695 flags.go:64] FLAG: --version="false" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915590 4695 flags.go:64] FLAG: --vmodule="" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915597 4695 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.915603 4695 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915745 4695 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915753 4695 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915760 4695 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:23:36 crc kubenswrapper[4695]: 
W1126 13:23:36.915766 4695 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915771 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915776 4695 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915782 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915787 4695 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915792 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915797 4695 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915803 4695 feature_gate.go:330] unrecognized feature gate: Example Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915808 4695 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915813 4695 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915837 4695 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915844 4695 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915850 4695 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915856 4695 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915862 4695 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiNetworks Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915868 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915873 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915878 4695 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915884 4695 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915890 4695 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915895 4695 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915900 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915905 4695 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915910 4695 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915916 4695 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915922 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915928 4695 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915935 4695 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915942 4695 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915947 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915954 4695 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915959 4695 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915965 4695 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915972 4695 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915979 4695 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915986 4695 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915993 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.915998 4695 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916004 4695 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916010 4695 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916015 4695 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916021 4695 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916028 4695 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916035 4695 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916041 4695 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916048 4695 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916054 4695 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916061 4695 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916067 4695 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916072 4695 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916078 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916083 4695 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916088 4695 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916094 4695 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916099 4695 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916104 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916109 4695 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916115 4695 
feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916120 4695 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916126 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916131 4695 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916137 4695 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916142 4695 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916147 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916152 4695 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916158 4695 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916163 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.916168 4695 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.916185 4695 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 
13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.932994 4695 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.933049 4695 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933186 4695 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933201 4695 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933210 4695 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933219 4695 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933227 4695 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933235 4695 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933242 4695 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933253 4695 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933266 4695 feature_gate.go:330] unrecognized feature gate: Example Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933277 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933285 4695 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933293 4695 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933303 4695 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933311 4695 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933319 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933328 4695 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933336 4695 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933369 4695 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933378 4695 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933386 4695 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933394 4695 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933402 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933412 4695 feature_gate.go:351] Setting deprecated 
feature gate KMSv1=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933422 4695 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933431 4695 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933441 4695 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933451 4695 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933459 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933467 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933475 4695 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933482 4695 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933490 4695 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933498 4695 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933506 4695 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933515 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933524 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 
13:23:36.933531 4695 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933539 4695 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933547 4695 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933554 4695 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933562 4695 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933573 4695 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933581 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933588 4695 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933596 4695 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933604 4695 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933611 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933619 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933626 4695 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933635 4695 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933642 4695 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 
13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933650 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933657 4695 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933665 4695 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933672 4695 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933680 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933688 4695 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933696 4695 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933704 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933711 4695 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933719 4695 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933727 4695 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933735 4695 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933742 4695 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933751 4695 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 
13:23:36.933758 4695 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933766 4695 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933777 4695 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933787 4695 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933797 4695 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.933806 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.933820 4695 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934036 4695 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934050 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934059 4695 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934067 4695 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934075 4695 
feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934083 4695 feature_gate.go:330] unrecognized feature gate: Example Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934091 4695 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934098 4695 feature_gate.go:330] unrecognized feature gate: OVNObservability Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934107 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934115 4695 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934126 4695 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934137 4695 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934145 4695 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934153 4695 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934161 4695 feature_gate.go:330] unrecognized feature gate: SignatureStores Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934169 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934177 4695 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934185 4695 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934195 4695 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934205 4695 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934214 4695 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934223 4695 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934230 4695 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934238 4695 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934246 4695 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934254 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934261 4695 feature_gate.go:330] unrecognized feature gate: NewOLM Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934269 4695 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934277 4695 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934284 4695 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934292 4695 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934301 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934309 4695 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Nov 26 13:23:36 crc 
kubenswrapper[4695]: W1126 13:23:36.934317 4695 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934326 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934334 4695 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934342 4695 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934374 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934382 4695 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934389 4695 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934398 4695 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934406 4695 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934414 4695 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934423 4695 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934430 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934438 4695 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934446 4695 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 26 13:23:36 crc 
kubenswrapper[4695]: W1126 13:23:36.934453 4695 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934461 4695 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934471 4695 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934480 4695 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934488 4695 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934496 4695 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934503 4695 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934510 4695 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934518 4695 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934526 4695 feature_gate.go:330] unrecognized feature gate: GatewayAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934534 4695 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934542 4695 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934550 4695 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934557 4695 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 
13:23:36.934565 4695 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934572 4695 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934580 4695 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934587 4695 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934597 4695 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934609 4695 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934617 4695 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934626 4695 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934674 4695 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Nov 26 13:23:36 crc kubenswrapper[4695]: W1126 13:23:36.934684 4695 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.934696 4695 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.935931 4695 
server.go:940] "Client rotation is on, will bootstrap in background" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.941662 4695 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.941788 4695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.943869 4695 server.go:997] "Starting client certificate rotation" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.943920 4695 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.944249 4695 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 18:27:17.414720014 +0000 UTC Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.944433 4695 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.973045 4695 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 26 13:23:36 crc kubenswrapper[4695]: E1126 13:23:36.978177 4695 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 13:23:36.981702 4695 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 26 13:23:36 crc kubenswrapper[4695]: I1126 
13:23:36.997515 4695 log.go:25] "Validated CRI v1 runtime API" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.035372 4695 log.go:25] "Validated CRI v1 image API" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.038241 4695 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.045417 4695 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-26-13-19-25-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.045469 4695 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.070998 4695 manager.go:217] Machine: {Timestamp:2025-11-26 13:23:37.068849914 +0000 UTC m=+0.704674996 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:38c50ac0-92c3-4f5b-bd42-96718c941574 BootID:2a904109-f06a-4e5e-98fe-96acd68c2c44 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 
HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:02:85:4a Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:02:85:4a Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0c:34:12 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:a8:9d:d6 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:33:10:a2 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:c2:6e:23 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:e1:e3:be:02:25 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:d6:68:c9:41:9b:f0 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 
Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.071313 4695 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.071647 4695 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.074154 4695 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.074406 4695 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.074463 4695 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.076130 4695 topology_manager.go:138] "Creating topology manager with none policy" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.076161 4695 container_manager_linux.go:303] "Creating device plugin manager" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.076806 4695 manager.go:142] "Creating Device Plugin manager" 
path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.076845 4695 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.077149 4695 state_mem.go:36] "Initialized new in-memory state store" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.077267 4695 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.081594 4695 kubelet.go:418] "Attempting to sync node with API server" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.081631 4695 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.081650 4695 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.081665 4695 kubelet.go:324] "Adding apiserver pod source" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.081679 4695 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.086481 4695 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.087483 4695 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.090987 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.091221 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.091324 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.091467 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.093244 4695 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097477 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097543 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 
13:23:37.097555 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097563 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097577 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097585 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097596 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097631 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097644 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097654 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097672 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.097698 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.099923 4695 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.100688 4695 server.go:1280] "Started kubelet" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.102271 4695 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.102273 4695 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.102858 4695 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.102904 4695 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 26 13:23:37 crc systemd[1]: Started Kubernetes Kubelet. Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.102967 4695 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 01:58:46.365785809 +0000 UTC Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.103024 4695 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 348h35m9.262766052s for next certificate rotation Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.103242 4695 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.103264 4695 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.103298 4695 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.103373 4695 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.103540 4695 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.103950 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.104276 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.104134 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="200ms" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.104694 4695 factory.go:55] Registering systemd factory Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.104736 4695 factory.go:221] Registration of the systemd container factory successfully Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.105077 4695 factory.go:153] Registering CRI-O factory Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.105104 4695 factory.go:221] Registration of the crio container factory successfully Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.105201 4695 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.105242 4695 factory.go:103] Registering Raw factory Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.105270 4695 manager.go:1196] Started watching for new ooms in manager Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.106216 4695 manager.go:319] Starting recovery of all containers Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.106741 4695 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection 
refused Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.106595 4695 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b9147ea059004 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 13:23:37.10061978 +0000 UTC m=+0.736444872,LastTimestamp:2025-11-26 13:23:37.10061978 +0000 UTC m=+0.736444872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.110635 4695 server.go:460] "Adding debug handlers to kubelet server" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.121869 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.121988 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122019 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 
13:23:37.122060 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122088 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122123 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122149 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122175 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122220 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122300 4695 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122339 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122404 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122438 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122507 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122636 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122667 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122697 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122745 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.122888 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123018 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123081 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123111 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123220 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123244 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123279 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123310 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123424 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123465 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123504 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123537 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123593 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123615 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123718 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123757 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123787 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123845 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123875 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123908 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123966 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.123994 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124028 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124156 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124193 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124244 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124281 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124325 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124440 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124469 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124506 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124532 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124567 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124646 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" 
seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124710 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124800 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124870 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124913 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124944 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.124981 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125084 
4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125138 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125167 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125195 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125259 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125285 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125406 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125450 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125479 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125542 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125628 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125733 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125789 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125816 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125852 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125924 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125963 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.125991 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.126019 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.126058 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127468 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127527 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127556 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127595 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127619 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127649 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127676 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127698 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127728 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127760 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127790 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127813 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127835 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127865 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127888 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127919 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127942 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127966 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.127996 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128037 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128066 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128090 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128122 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128151 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128174 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128203 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128241 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128282 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128315 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128372 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128406 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128437 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128547 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.128696 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129458 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129484 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129509 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129532 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129561 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129583 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129602 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129634 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129656 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129680 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129694 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129717 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129740 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129757 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129771 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129804 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129820 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129844 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129929 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.129945 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.132947 4695 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133006 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133027 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133040 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133053 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133069 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133082 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133095 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133108 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133121 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133133 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133147 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133158 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133171 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133185 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133203 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133216 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133231 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133245 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133261 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133279 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133315 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133333 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133375 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133392 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133409 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133421 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133434 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133501 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133519 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133535 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133550 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133564 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133581 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133597 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133613 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133646 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133661 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133677 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133694 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133711 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133726 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133741 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133758 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133776 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133791 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133809 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133825 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133842 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133859 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133875 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133890 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133904 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133918 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133933 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133949 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert"
seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133966 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133981 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.133996 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134013 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134030 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134061 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 26 13:23:37 crc 
kubenswrapper[4695]: I1126 13:23:37.134078 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134099 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134115 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134133 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134148 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134163 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134175 4695 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134188 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134199 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134210 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134222 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134236 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134251 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134262 4695 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134273 4695 reconstruct.go:97] "Volume reconstruction finished" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.134280 4695 reconciler.go:26] "Reconciler: start to sync state" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.144210 4695 manager.go:324] Recovery completed Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.154050 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.156687 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.156796 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.156833 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.157038 4695 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.157758 4695 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.157786 4695 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.157811 4695 state_mem.go:36] "Initialized new in-memory state store" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.160732 4695 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.160858 4695 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.160954 4695 kubelet.go:2335] "Starting kubelet main sync loop" Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.161054 4695 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.162419 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.162514 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.177045 4695 policy_none.go:49] "None policy: Start" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.177999 4695 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 26 
13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.178117 4695 state_mem.go:35] "Initializing new in-memory state store" Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.204440 4695 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.242078 4695 manager.go:334] "Starting Device Plugin manager" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.242174 4695 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.242199 4695 server.go:79] "Starting device plugin registration server" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.243039 4695 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.243075 4695 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.243390 4695 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.243510 4695 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.243520 4695 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.251685 4695 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.262152 4695 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.262290 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.265487 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.265533 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.265544 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.265684 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.265973 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.266064 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.266691 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.266776 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.266808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.267063 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.267231 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.267272 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.267955 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.267977 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.268002 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.268332 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.268433 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.268451 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.268672 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.268744 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.268776 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.269835 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.269908 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.269933 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270304 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270392 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270416 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270459 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270489 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270663 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270744 
4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.270774 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.271782 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.271844 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.271857 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.273229 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.273280 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.273291 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.273655 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.273709 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.274775 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.274869 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.274886 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.304981 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="400ms" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337425 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337507 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337550 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337626 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337716 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337772 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337804 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337820 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337839 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337868 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337914 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337956 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.337980 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.338004 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.338021 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.343720 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.345407 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.345458 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.345476 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.345515 4695 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.346000 4695 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439203 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439284 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439319 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439375 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439398 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439416 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439436 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439456 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439459 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439525 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439535 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439609 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439643 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439676 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439686 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439773 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439772 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439796 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439863 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439889 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439909 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439949 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439967 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.439988 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.440016 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.440069 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.440091 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.440070 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.440128 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.440235 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.546838 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.548334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.548389 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.548399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.548426 4695 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.548781 4695 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.618034 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.623724 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.642954 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.663491 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-cb5ed22167c922a40f9127b9fe3bc78c4578e9cdc0dcf24fcf602f4ecd260f86 WatchSource:0}: Error finding container cb5ed22167c922a40f9127b9fe3bc78c4578e9cdc0dcf24fcf602f4ecd260f86: Status 404 returned error can't find the container with id cb5ed22167c922a40f9127b9fe3bc78c4578e9cdc0dcf24fcf602f4ecd260f86
Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.664766 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4fe60a24f9d188378472d572983d53d195eb8ecab10e85fa1a18ce5b90f2ec4d WatchSource:0}: Error finding container 4fe60a24f9d188378472d572983d53d195eb8ecab10e85fa1a18ce5b90f2ec4d: Status 404 returned error can't find the container with id 4fe60a24f9d188378472d572983d53d195eb8ecab10e85fa1a18ce5b90f2ec4d
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.667566 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.676065 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.684361 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-8f73876ffca09481de881d8a422967eeffb38dca6ed2f8fb0ffaac3e84d4b57c WatchSource:0}: Error finding container 8f73876ffca09481de881d8a422967eeffb38dca6ed2f8fb0ffaac3e84d4b57c: Status 404 returned error can't find the container with id 8f73876ffca09481de881d8a422967eeffb38dca6ed2f8fb0ffaac3e84d4b57c
Nov 26 13:23:37 crc kubenswrapper[4695]: W1126 13:23:37.702104 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-495419350eb85bb0901a69d135f35a8a5f4e1b683f92234b2f8ef6b259195957 WatchSource:0}: Error finding container 495419350eb85bb0901a69d135f35a8a5f4e1b683f92234b2f8ef6b259195957: Status 404 returned error can't find the container with id 495419350eb85bb0901a69d135f35a8a5f4e1b683f92234b2f8ef6b259195957
Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.705572 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="800ms"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.949550 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.951209 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.951247 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.951260 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:37 crc kubenswrapper[4695]: I1126 13:23:37.951286 4695 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 26 13:23:37 crc kubenswrapper[4695]: E1126 13:23:37.951610 4695 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.108293 4695 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.166118 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4fa17a9ab42d8ad4a77e751a7a30c5fd89a272d754156308af110542e76ec24f"}
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.169794 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4fe60a24f9d188378472d572983d53d195eb8ecab10e85fa1a18ce5b90f2ec4d"}
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.171554 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"cb5ed22167c922a40f9127b9fe3bc78c4578e9cdc0dcf24fcf602f4ecd260f86"}
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.172934 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"495419350eb85bb0901a69d135f35a8a5f4e1b683f92234b2f8ef6b259195957"}
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.174544 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8f73876ffca09481de881d8a422967eeffb38dca6ed2f8fb0ffaac3e84d4b57c"}
Nov 26 13:23:38 crc kubenswrapper[4695]: W1126 13:23:38.249471 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:38 crc kubenswrapper[4695]: E1126 13:23:38.249593 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:23:38 crc kubenswrapper[4695]: W1126 13:23:38.252187 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:38 crc kubenswrapper[4695]: E1126 13:23:38.252276 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:23:38 crc kubenswrapper[4695]: W1126 13:23:38.264156 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:38 crc kubenswrapper[4695]: E1126 13:23:38.264250 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:23:38 crc kubenswrapper[4695]: W1126 13:23:38.346793 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:38 crc kubenswrapper[4695]: E1126 13:23:38.346893 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:23:38 crc kubenswrapper[4695]: E1126 13:23:38.506241 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="1.6s"
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.752607 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.753866 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.753904 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.753915 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:38 crc kubenswrapper[4695]: I1126 13:23:38.753938 4695 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 26 13:23:38 crc kubenswrapper[4695]: E1126 13:23:38.754508 4695 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.107828 4695 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.167133 4695 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 26 13:23:39 crc kubenswrapper[4695]: E1126 13:23:39.168296 4695 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.181870 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.182150 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.182434 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.182651 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.182180 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.184826 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.184894 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.184954 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.185097 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e" exitCode=0
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.185317 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.185659 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.186739 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.186805 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.186826 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.188323 4695 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="9529b606245f24a456aca8b461125946fdc44c82fad6576299ed9e1d2a28f425" exitCode=0
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.188635 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.188719 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"9529b606245f24a456aca8b461125946fdc44c82fad6576299ed9e1d2a28f425"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.189415 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.189963 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.190015 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.190038 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.190607 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.190637 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.190648 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.191520 4695 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9" exitCode=0
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.191615 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.191832 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.193664 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.193698 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.193709 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.194436 4695 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="0aea7b67799276f736c3a10bb3ac12f5300ef604e0b3f1d61b99fdce84d93913" exitCode=0
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.194487 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"0aea7b67799276f736c3a10bb3ac12f5300ef604e0b3f1d61b99fdce84d93913"}
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.194566 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.195800 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.195873 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:39 crc kubenswrapper[4695]: I1126 13:23:39.195900 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.107493 4695 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:40 crc kubenswrapper[4695]: E1126 13:23:40.107603 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="3.2s"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.201580 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.201656 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.201673 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.201688 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.203623 4695 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6a3c5fbe50fff4d819077adb5ec1382b264979f3b7c7fdaf80493babf77dd2ce" exitCode=0
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.203698 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6a3c5fbe50fff4d819077adb5ec1382b264979f3b7c7fdaf80493babf77dd2ce"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.203795 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.204935 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.204966 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.204976 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.207195 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.207265 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.207268 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.207398 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.210053 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.210095 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.210109 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.213469 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.213659 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.213670 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"1c91627a542bd913053affc7fdfc48889b180b899de2436b3137417e3173c472"}
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.215326 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.215387 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.215405 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.215571 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.215645 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.215666 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.354888 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.356493 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.356549 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.356564 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.356597 4695 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Nov 26 13:23:40 crc kubenswrapper[4695]: E1126 13:23:40.357240 4695 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.156:6443: connect: connection refused" node="crc"
Nov 26 13:23:40 crc kubenswrapper[4695]: I1126 13:23:40.431819 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Nov 26 13:23:40 crc kubenswrapper[4695]: W1126 13:23:40.625116 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.156:6443: connect: connection refused
Nov 26 13:23:40 crc kubenswrapper[4695]: E1126 13:23:40.625204 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to
list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.156:6443: connect: connection refused" logger="UnhandledError" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.226879 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845"} Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.226922 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.228610 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.228644 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.228654 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229084 4695 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8db312de9c19928748f17c7e6f713cc8c003cedb48ad4ac58730d5eb3b759a60" exitCode=0 Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229138 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8db312de9c19928748f17c7e6f713cc8c003cedb48ad4ac58730d5eb3b759a60"} Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229158 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229213 4695 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229239 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229268 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229221 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229951 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229976 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.229988 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.230718 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.230747 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.230756 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.230903 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.230927 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 
13:23:41.230939 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.230976 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.230998 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.231008 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.385306 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:23:41 crc kubenswrapper[4695]: I1126 13:23:41.775683 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.236082 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"86f00cf859dd7770416e39cc1fb4dd07697572ab1bb23bcbf4fe605ee1286303"} Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.236159 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"df1f2bf5949e83a2deebf90e08e56dc63d93b4a3521b246036627216abbc6a6b"} Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.236183 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a612d4de026cb4d82be39949a238a35f286234b21b03e1a8400f7dfc23989da4"} Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.236165 4695 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.236211 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.236230 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c48a1733d0598496d18edd8345db071466d28fa83e7bf4531080a9dbd10a605e"} Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.236185 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.237212 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.237239 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.237247 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.237244 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.237413 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:42 crc kubenswrapper[4695]: I1126 13:23:42.237434 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.246909 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d08d46f528704409795ae69ca0fb5667276ae061e00ecfc74a3c5cf49187d0f8"} Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.246946 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.246988 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.248318 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.248407 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.248430 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.248424 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.248614 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.248641 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.449373 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.449610 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.450940 4695 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.450996 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.451018 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.548317 4695 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.557622 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.559019 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.559078 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.559095 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:43 crc kubenswrapper[4695]: I1126 13:23:43.559130 4695 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.249787 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.251218 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.251278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.251311 4695 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.529590 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.529768 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.531100 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.531144 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.531159 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:44 crc kubenswrapper[4695]: I1126 13:23:44.535913 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.253142 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.254229 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.254288 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.254306 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.490377 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.746692 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.746986 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.748795 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.748857 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.748868 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.778299 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.778621 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.780147 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.780205 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:45 crc kubenswrapper[4695]: I1126 13:23:45.780216 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:46 crc kubenswrapper[4695]: I1126 13:23:46.255386 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:46 crc kubenswrapper[4695]: I1126 
13:23:46.256635 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:46 crc kubenswrapper[4695]: I1126 13:23:46.256677 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:46 crc kubenswrapper[4695]: I1126 13:23:46.256695 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:47 crc kubenswrapper[4695]: E1126 13:23:47.251880 4695 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 26 13:23:48 crc kubenswrapper[4695]: I1126 13:23:48.491101 4695 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 26 13:23:48 crc kubenswrapper[4695]: I1126 13:23:48.491191 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.437331 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.437495 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.438582 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.438623 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.438632 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.612092 4695 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.612148 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 26 13:23:50 crc kubenswrapper[4695]: W1126 13:23:50.691564 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.691656 4695 trace.go:236] Trace[610186154]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:23:40.690) (total time: 10001ms): Nov 26 13:23:50 crc kubenswrapper[4695]: Trace[610186154]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:23:50.691) Nov 26 13:23:50 crc kubenswrapper[4695]: Trace[610186154]: [10.001121352s] [10.001121352s] END Nov 26 
13:23:50 crc kubenswrapper[4695]: E1126 13:23:50.691681 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 26 13:23:50 crc kubenswrapper[4695]: W1126 13:23:50.942257 4695 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Nov 26 13:23:50 crc kubenswrapper[4695]: I1126 13:23:50.942433 4695 trace.go:236] Trace[2390740]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:23:40.940) (total time: 10001ms): Nov 26 13:23:50 crc kubenswrapper[4695]: Trace[2390740]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (13:23:50.942) Nov 26 13:23:50 crc kubenswrapper[4695]: Trace[2390740]: [10.001994171s] [10.001994171s] END Nov 26 13:23:50 crc kubenswrapper[4695]: E1126 13:23:50.942465 4695 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.109787 4695 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.260265 4695 
patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.260339 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.264924 4695 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.264975 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.702214 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.702504 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.704185 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 
13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.704259 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.704283 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.760062 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.784567 4695 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]log ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]etcd ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/generic-apiserver-start-informers ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/priority-and-fairness-filter ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-apiextensions-informers ok Nov 26 13:23:51 crc kubenswrapper[4695]: 
[+]poststarthook/start-apiextensions-controllers ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/crd-informer-synced ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-system-namespaces-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-cluster-authentication-info-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-legacy-token-tracking-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-service-ip-repair-controllers ok Nov 26 13:23:51 crc kubenswrapper[4695]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Nov 26 13:23:51 crc kubenswrapper[4695]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/priority-and-fairness-config-producer ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/bootstrap-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/start-kube-aggregator-informers ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/apiservice-status-local-available-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/apiservice-status-remote-available-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/apiservice-registration-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/apiservice-wait-for-first-sync ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/apiservice-discovery-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/kube-apiserver-autoregistration ok Nov 26 13:23:51 crc kubenswrapper[4695]: 
[+]autoregister-completion ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/apiservice-openapi-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: [+]poststarthook/apiservice-openapiv3-controller ok Nov 26 13:23:51 crc kubenswrapper[4695]: livez check failed Nov 26 13:23:51 crc kubenswrapper[4695]: I1126 13:23:51.784654 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:23:52 crc kubenswrapper[4695]: I1126 13:23:52.268627 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:52 crc kubenswrapper[4695]: I1126 13:23:52.269967 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:52 crc kubenswrapper[4695]: I1126 13:23:52.270035 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:52 crc kubenswrapper[4695]: I1126 13:23:52.270046 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:52 crc kubenswrapper[4695]: I1126 13:23:52.294410 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Nov 26 13:23:53 crc kubenswrapper[4695]: I1126 13:23:53.271292 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:23:53 crc kubenswrapper[4695]: I1126 13:23:53.272552 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:23:53 crc kubenswrapper[4695]: I1126 13:23:53.272618 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:23:53 crc kubenswrapper[4695]: I1126 
13:23:53.272642 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:23:54 crc kubenswrapper[4695]: I1126 13:23:54.457405 4695 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 26 13:23:56 crc kubenswrapper[4695]: E1126 13:23:56.250903 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.252336 4695 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.253763 4695 trace.go:236] Trace[1440101903]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:23:44.405) (total time: 11848ms): Nov 26 13:23:56 crc kubenswrapper[4695]: Trace[1440101903]: ---"Objects listed" error: 11848ms (13:23:56.253) Nov 26 13:23:56 crc kubenswrapper[4695]: Trace[1440101903]: [11.848530011s] [11.848530011s] END Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.253790 4695 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.255090 4695 trace.go:236] Trace[235224163]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (26-Nov-2025 13:23:41.418) (total time: 14836ms): Nov 26 13:23:56 crc kubenswrapper[4695]: Trace[235224163]: ---"Objects listed" error: 14836ms (13:23:56.255) Nov 26 13:23:56 crc kubenswrapper[4695]: Trace[235224163]: [14.836852857s] [14.836852857s] END Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.255113 4695 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 26 13:23:56 crc 
kubenswrapper[4695]: E1126 13:23:56.256088 4695 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.256820 4695 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.274077 4695 csr.go:261] certificate signing request csr-8987c is approved, waiting to be issued Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.284970 4695 csr.go:257] certificate signing request csr-8987c is issued Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.330358 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.340764 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.350216 4695 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.492408 4695 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.492568 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 26 13:23:56 
crc kubenswrapper[4695]: I1126 13:23:56.782716 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.783307 4695 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.783399 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.788519 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:56 crc kubenswrapper[4695]: I1126 13:23:56.943011 4695 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 26 13:23:56 crc kubenswrapper[4695]: W1126 13:23:56.943526 4695 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 26 13:23:56 crc kubenswrapper[4695]: W1126 13:23:56.943572 4695 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 26 13:23:56 crc kubenswrapper[4695]: W1126 13:23:56.945132 4695 reflector.go:484] 
k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.094559 4695 apiserver.go:52] "Watching apiserver" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.099251 4695 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.099967 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-r5n2z","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-pslgh","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/machine-config-daemon-mmgd2","openshift-multus/multus-hgtpx","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-node-qc7jt"] Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.100487 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.100687 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.100791 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.100883 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.100818 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.100814 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.101420 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.101505 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.101625 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.101479 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.101697 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.101712 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.101799 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-pslgh" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.101851 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.102980 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.103959 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.104210 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.104392 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.104486 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.104626 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.104675 4695 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.104845 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.104882 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.105030 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.105287 4695 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.105331 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.105690 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.108124 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.108147 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.108181 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.108470 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.115835 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.115988 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.118638 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.118729 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 26 13:23:57 crc kubenswrapper[4695]: 
I1126 13:23:57.118749 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.118959 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.119808 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.120251 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.120572 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.120769 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.121058 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.121159 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.121987 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.122057 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.124444 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.130401 4695 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.143406 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.162114 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164315 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164373 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164400 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164427 4695 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164453 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164476 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164497 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164518 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164539 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164560 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164582 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164605 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164626 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164647 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164670 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164693 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164717 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164741 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164763 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164785 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 
13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164807 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164830 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164853 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164876 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164903 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164926 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164953 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.164987 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165012 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165037 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165060 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165103 4695 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165126 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165155 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165179 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165201 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165221 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165243 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165268 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165268 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165292 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165316 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165338 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165376 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165400 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165421 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165443 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165468 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165491 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165513 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165537 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165560 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165584 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165536 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165691 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165720 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165748 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165760 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165770 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165829 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165867 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165904 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165935 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165825 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165950 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.165969 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166069 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166080 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166120 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166148 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166178 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166201 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166229 4695 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166246 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166258 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166316 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166398 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166438 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166456 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166328 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166482 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166528 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166557 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166580 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166604 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166545 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166632 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166693 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166736 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166760 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166822 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" 
(UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166858 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166882 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166907 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166927 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166938 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166967 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166970 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.166998 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167028 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167056 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167083 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167107 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167132 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167165 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167190 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167202 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167215 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167324 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167365 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167456 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167492 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167539 4695 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167558 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167571 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167601 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167630 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167657 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167681 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167704 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167728 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167752 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167774 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167797 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167819 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167840 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167863 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167886 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167838 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167914 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167940 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167950 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167963 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.167994 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168026 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168032 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168055 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168083 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168108 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168129 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168155 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168187 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168219 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168252 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168280 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168295 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168326 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168389 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168424 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168456 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168489 4695 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168518 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168556 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168590 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168625 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168657 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168663 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168689 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168727 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168763 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168793 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168829 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168861 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168895 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168927 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168959 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168995 4695 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169030 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169066 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169104 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169140 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169171 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" 
(UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169205 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169246 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169283 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169320 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169380 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169419 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169454 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169492 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169525 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169558 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169591 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 26 
13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169624 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169660 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169695 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169730 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169764 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169801 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169835 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169869 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169909 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169949 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169987 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170023 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170061 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170094 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170127 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170163 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170198 4695 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170235 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170278 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170317 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170382 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170432 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170465 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170503 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170539 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170576 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170623 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170667 
4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170704 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170742 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170805 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170845 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170951 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171009 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171058 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-cni-bin\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171095 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-etc-kubernetes\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171135 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-var-lib-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171172 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42wpj\" (UniqueName: \"kubernetes.io/projected/b0bd1ae7-27db-479a-9f8e-256980eef3be-kube-api-access-42wpj\") pod \"node-resolver-pslgh\" (UID: \"b0bd1ae7-27db-479a-9f8e-256980eef3be\") " pod="openshift-dns/node-resolver-pslgh" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171214 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171255 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171292 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-system-cni-dir\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171332 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b0bd1ae7-27db-479a-9f8e-256980eef3be-hosts-file\") pod \"node-resolver-pslgh\" (UID: \"b0bd1ae7-27db-479a-9f8e-256980eef3be\") " pod="openshift-dns/node-resolver-pslgh" Nov 26 13:23:57 crc 
kubenswrapper[4695]: I1126 13:23:57.171396 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171435 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-cni-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171468 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-config\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171513 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171547 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-os-release\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " 
pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171584 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-multus-certs\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171618 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cni-binary-copy\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171651 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-hostroot\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171721 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-ovn\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171718 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171765 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171856 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-netd\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171902 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171943 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-system-cni-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172001 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-os-release\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 
13:23:57.172038 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172079 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqsvv\" (UniqueName: \"kubernetes.io/projected/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-kube-api-access-mqsvv\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174639 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174702 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx2zx\" (UniqueName: \"kubernetes.io/projected/133aab88-6958-4575-aefd-c4675266edd5-kube-api-access-hx2zx\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174751 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-systemd-units\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174814 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-node-log\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174893 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-log-socket\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174939 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-script-lib\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174975 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175001 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: 
\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175030 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175055 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-slash\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175084 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-netns\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175122 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw8dj\" (UniqueName: \"kubernetes.io/projected/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-kube-api-access-sw8dj\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175156 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-socket-dir-parent\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175190 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-cni-multus\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175226 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-rootfs\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175256 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-proxy-tls\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175289 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175320 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175379 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/133aab88-6958-4575-aefd-c4675266edd5-cni-binary-copy\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175413 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/133aab88-6958-4575-aefd-c4675266edd5-multus-daemon-config\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175447 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cnibin\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175490 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-bin\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175527 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-env-overrides\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175566 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175601 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175633 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-cnibin\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175693 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-netns\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175733 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovn-node-metrics-cert\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175765 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-kubelet\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175798 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-etc-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175832 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175867 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175891 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-k8s-cni-cncf-io\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175918 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-kubelet\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175945 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhzf\" (UniqueName: \"kubernetes.io/projected/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-kube-api-access-pbhzf\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175987 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176021 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-conf-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " 
pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176043 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-systemd\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176112 4695 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176129 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176145 4695 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176160 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176178 4695 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176197 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176214 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176232 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176250 4695 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176267 4695 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176283 4695 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176299 4695 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176317 4695 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath 
\"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176330 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176368 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176383 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176398 4695 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176414 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181284 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.186373 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188704 4695 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.190540 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.168922 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169022 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169292 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169888 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.169990 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170272 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170557 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.192023 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.170957 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171075 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171102 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171237 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171533 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171593 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.171977 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172064 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172323 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172594 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.192215 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172636 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172697 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172815 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172935 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.172999 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173186 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173196 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173248 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.192329 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173380 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173452 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173598 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173722 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.173748 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174129 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174418 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.174502 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175147 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175809 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.175951 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176114 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176221 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.192475 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176390 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176594 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.176959 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.177007 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.177034 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.177214 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.192531 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.177304 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.177890 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.177957 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.178027 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.178406 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:23:57.678016782 +0000 UTC m=+21.313841964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.178994 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.179113 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.179412 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.179436 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.179470 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.179706 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.179983 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.180119 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.180135 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.180169 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.180368 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.180641 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181151 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181495 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181312 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181643 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181814 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181882 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.181887 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182151 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182195 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182230 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182431 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182563 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182630 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182839 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.182992 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.183024 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.183188 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.183246 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.183386 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.183521 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.183248 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.184135 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.184259 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.184467 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.184856 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.185115 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.185202 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.185465 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.185644 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.185663 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.185781 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.186424 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.186819 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187239 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187336 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187374 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187440 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187472 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187711 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187817 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187817 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187864 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.187882 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188110 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188216 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188236 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188374 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188468 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.188551 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188810 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188829 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188926 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.188988 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.189002 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.189309 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.189554 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.189547 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.189743 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.190232 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.190285 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.190380 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.190684 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.190621 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.191608 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.191739 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.191827 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.191958 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.192944 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.193246 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:57.693218152 +0000 UTC m=+21.329043364 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.193161 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.193281 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:57.693269893 +0000 UTC m=+21.329095105 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.193371 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.193690 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.194451 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.196632 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.198543 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.198430 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.198559 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.198736 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.200180 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.200256 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.200495 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.200623 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.201147 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.202270 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.202542 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.203217 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.204674 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.207787 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.207819 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.207837 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.208676 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:57.708646328 +0000 UTC m=+21.344471410 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.210274 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.215066 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.215100 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.219513 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.219518 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.219743 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.222190 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.222318 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.222531 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.222938 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.223499 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.223528 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.223571 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.223601 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.224608 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.224723 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.228224 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.227409 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.229231 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.229456 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.229600 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.230272 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.231182 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.231753 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.232317 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.232442 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.234053 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.235228 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.235315 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.237785 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.237815 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.237851 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.237935 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:57.737914842 +0000 UTC m=+21.373739924 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.242765 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.243280 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.244705 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.254628 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.255898 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.261310 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.265158 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.266917 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.272985 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers 
with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276793 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-system-cni-dir\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276822 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42wpj\" (UniqueName: \"kubernetes.io/projected/b0bd1ae7-27db-479a-9f8e-256980eef3be-kube-api-access-42wpj\") pod \"node-resolver-pslgh\" (UID: \"b0bd1ae7-27db-479a-9f8e-256980eef3be\") " pod="openshift-dns/node-resolver-pslgh" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276851 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276872 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/b0bd1ae7-27db-479a-9f8e-256980eef3be-hosts-file\") pod \"node-resolver-pslgh\" (UID: \"b0bd1ae7-27db-479a-9f8e-256980eef3be\") " pod="openshift-dns/node-resolver-pslgh" Nov 
26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276888 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-cni-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276882 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-system-cni-dir\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276905 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-config\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276926 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-os-release\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276950 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-multus-certs\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276962 4695 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276973 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cni-binary-copy\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.276990 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-ovn\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277004 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277023 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-hostroot\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277100 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" 
(UniqueName: \"kubernetes.io/host-path/b0bd1ae7-27db-479a-9f8e-256980eef3be-hosts-file\") pod \"node-resolver-pslgh\" (UID: \"b0bd1ae7-27db-479a-9f8e-256980eef3be\") " pod="openshift-dns/node-resolver-pslgh" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277159 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-ovn\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277196 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277259 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-os-release\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277280 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-cni-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277306 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-multus-certs\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") 
" pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277390 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-hostroot\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277631 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cni-binary-copy\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277690 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-system-cni-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277709 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-os-release\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277811 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc 
kubenswrapper[4695]: I1126 13:23:57.277843 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-netd\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277864 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx2zx\" (UniqueName: \"kubernetes.io/projected/133aab88-6958-4575-aefd-c4675266edd5-kube-api-access-hx2zx\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277880 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqsvv\" (UniqueName: \"kubernetes.io/projected/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-kube-api-access-mqsvv\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277897 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-systemd-units\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277911 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-node-log\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277926 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-log-socket\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277945 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-script-lib\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277965 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277993 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-slash\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278009 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-netns\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278050 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-sw8dj\" (UniqueName: \"kubernetes.io/projected/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-kube-api-access-sw8dj\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277757 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-system-cni-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.277790 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-os-release\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278253 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-config\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278289 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-netd\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278326 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278357 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-proxy-tls\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278373 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278406 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278419 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/133aab88-6958-4575-aefd-c4675266edd5-cni-binary-copy\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278452 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-socket-dir-parent\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278474 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-cni-multus\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278493 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-rootfs\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278506 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-bin\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278520 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-env-overrides\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278535 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/133aab88-6958-4575-aefd-c4675266edd5-multus-daemon-config\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278552 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cnibin\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278575 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-cnibin\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278588 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-netns\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278601 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovn-node-metrics-cert\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278617 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-k8s-cni-cncf-io\") pod \"multus-hgtpx\" (UID: 
\"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278631 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-kubelet\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278668 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbhzf\" (UniqueName: \"kubernetes.io/projected/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-kube-api-access-pbhzf\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278683 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-kubelet\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278722 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-etc-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278737 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-mmgd2\" (UID: 
\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278751 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-conf-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278766 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-systemd\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278790 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-cni-bin\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278806 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-etc-kubernetes\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278823 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-var-lib-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 
13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278846 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-script-lib\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278894 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-node-log\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278906 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278944 4695 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278958 4695 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278971 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278984 4695 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278999 4695 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279005 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-tuning-conf-dir\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279011 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279024 4695 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279041 4695 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279047 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-log-socket\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279053 4695 reconciler_common.go:293] 
"Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279088 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279130 4695 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279146 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279159 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279170 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279205 4695 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279220 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279232 4695 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279244 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279256 4695 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279292 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279294 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-slash\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279308 4695 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279321 4695 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279333 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279335 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-netns\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279398 4695 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279416 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279429 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279442 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279455 4695 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279478 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279493 4695 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279506 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279519 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279531 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279543 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279552 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/133aab88-6958-4575-aefd-c4675266edd5-multus-daemon-config\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279556 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279589 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279601 4695 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279613 4695 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279624 4695 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279636 4695 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279647 4695 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279660 4695 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279672 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279683 4695 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279694 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279705 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279719 4695 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279733 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" 
DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279746 4695 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279757 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279768 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279779 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279789 4695 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279801 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279814 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 
13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279826 4695 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279838 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279851 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279861 4695 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279871 4695 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279882 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279893 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279906 4695 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279918 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279930 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279941 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279952 4695 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279952 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cnibin\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279963 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279983 4695 reconciler_common.go:293] "Volume 
detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279999 4695 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280011 4695 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280024 4695 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280038 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280051 4695 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280063 4695 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280075 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: 
\"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280088 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280101 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280114 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280128 4695 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280140 4695 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280155 4695 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280167 4695 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280179 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280191 4695 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280204 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280215 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280227 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280239 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280249 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc 
kubenswrapper[4695]: I1126 13:23:57.280261 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280272 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280285 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280298 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280310 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280323 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280336 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280375 4695 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280389 4695 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280424 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280436 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280448 4695 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280464 4695 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280477 4695 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280489 4695 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280501 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280513 4695 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280526 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280539 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280551 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280565 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280577 4695 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280589 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280600 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280611 4695 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280623 4695 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280634 4695 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280648 4695 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280659 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: 
\"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280672 4695 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280685 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280697 4695 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280709 4695 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280722 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280732 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280743 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280754 4695 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280764 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280774 4695 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280785 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280796 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280807 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280818 4695 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath 
\"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280829 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280839 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280850 4695 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280861 4695 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280872 4695 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280884 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280895 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280905 4695 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280919 4695 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280931 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280943 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280954 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280965 4695 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280977 4695 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.280988 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 
crc kubenswrapper[4695]: I1126 13:23:57.280999 4695 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281010 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281023 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281036 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281049 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281061 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281073 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 
13:23:57.281085 4695 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281097 4695 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281108 4695 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281120 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281134 4695 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281146 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281157 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281168 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281180 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281191 4695 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281204 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281215 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281228 4695 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281239 4695 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281250 4695 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 
13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281262 4695 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281274 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281287 4695 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281298 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281309 4695 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281585 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-cnibin\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.281618 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-netns\") pod \"multus-hgtpx\" (UID: 
\"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.282529 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-rootfs\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278417 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-systemd-units\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.282666 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-socket-dir-parent\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.282760 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-cni-multus\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.283128 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.283564 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-env-overrides\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.283617 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-bin\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.279088 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.283938 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284003 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-multus-conf-dir\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284060 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-kubelet\") pod \"ovnkube-node-qc7jt\" 
(UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284078 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-cni-bin\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284092 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-etc-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284118 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-systemd\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284159 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-etc-kubernetes\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284186 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-var-lib-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: 
I1126 13:23:57.284214 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-run-k8s-cni-cncf-io\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284329 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/133aab88-6958-4575-aefd-c4675266edd5-host-var-lib-kubelet\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.284586 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.286113 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-mcd-auth-proxy-config\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.286176 4695 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-26 13:18:56 +0000 UTC, rotation deadline is 2026-09-27 17:48:59.437045109 +0000 UTC Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.286209 4695 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7324h25m2.150837663s for next certificate rotation Nov 26 
13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.288673 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/133aab88-6958-4575-aefd-c4675266edd5-cni-binary-copy\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.278929 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-openvswitch\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.289323 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845" exitCode=255 Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.290734 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845"} Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.292731 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovn-node-metrics-cert\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.294866 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqsvv\" (UniqueName: \"kubernetes.io/projected/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-kube-api-access-mqsvv\") pod \"machine-config-daemon-mmgd2\" (UID: 
\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.295441 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx2zx\" (UniqueName: \"kubernetes.io/projected/133aab88-6958-4575-aefd-c4675266edd5-kube-api-access-hx2zx\") pod \"multus-hgtpx\" (UID: \"133aab88-6958-4575-aefd-c4675266edd5\") " pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.299604 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42wpj\" (UniqueName: \"kubernetes.io/projected/b0bd1ae7-27db-479a-9f8e-256980eef3be-kube-api-access-42wpj\") pod \"node-resolver-pslgh\" (UID: \"b0bd1ae7-27db-479a-9f8e-256980eef3be\") " pod="openshift-dns/node-resolver-pslgh" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.299730 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/73cbd5f2-751e-49c2-b804-e81b9ca46cd4-proxy-tls\") pod \"machine-config-daemon-mmgd2\" (UID: \"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\") " pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.302651 4695 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.303007 4695 scope.go:117] "RemoveContainer" containerID="208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.303477 4695 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.306858 4695 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.310024 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw8dj\" (UniqueName: \"kubernetes.io/projected/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-kube-api-access-sw8dj\") pod \"ovnkube-node-qc7jt\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.317849 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbhzf\" (UniqueName: \"kubernetes.io/projected/c4272f55-b840-43d1-bae3-5f3fa57b1ec6-kube-api-access-pbhzf\") pod \"multus-additional-cni-plugins-r5n2z\" (UID: \"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\") " pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.318851 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.338258 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.349422 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.359652 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA 
for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.367986 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.376905 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.385352 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.399395 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.412805 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\
\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ff
ac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.419238 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.420909 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.429064 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.435283 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.437924 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.448273 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:23:57 crc kubenswrapper[4695]: W1126 13:23:57.448399 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-aaedb3f3a557a65ec1a3296949db8135e0a1b20124ff63325396c88317f385fa WatchSource:0}: Error finding container aaedb3f3a557a65ec1a3296949db8135e0a1b20124ff63325396c88317f385fa: Status 404 returned error can't find the container with id aaedb3f3a557a65ec1a3296949db8135e0a1b20124ff63325396c88317f385fa Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.451187 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.461276 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.461979 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: W1126 13:23:57.465076 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73cbd5f2_751e_49c2_b804_e81b9ca46cd4.slice/crio-a5e5f78d9ec1d02cefdaad7962abd4d5c9a882e9614c511b7c8daf4dc5f7ef5f WatchSource:0}: Error finding container a5e5f78d9ec1d02cefdaad7962abd4d5c9a882e9614c511b7c8daf4dc5f7ef5f: Status 404 returned error can't find the container with id a5e5f78d9ec1d02cefdaad7962abd4d5c9a882e9614c511b7c8daf4dc5f7ef5f Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.473020 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.476421 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: W1126 13:23:57.483339 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ddc0e32618b823796862928cd09bd7d70b193fa131b9ed57e4f7799fbf56a150 WatchSource:0}: Error finding container ddc0e32618b823796862928cd09bd7d70b193fa131b9ed57e4f7799fbf56a150: Status 404 returned error can't find the container with id ddc0e32618b823796862928cd09bd7d70b193fa131b9ed57e4f7799fbf56a150 Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.487326 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pslgh" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.488441 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.499474 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.511475 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.513619 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-hgtpx" Nov 26 13:23:57 crc kubenswrapper[4695]: W1126 13:23:57.523437 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0bd1ae7_27db_479a_9f8e_256980eef3be.slice/crio-d11e548e9418d00d463c0c1b6397ff78209ed806008e26acf752ecd6f1292d7b WatchSource:0}: Error finding container d11e548e9418d00d463c0c1b6397ff78209ed806008e26acf752ecd6f1292d7b: Status 404 returned error can't find the container with id d11e548e9418d00d463c0c1b6397ff78209ed806008e26acf752ecd6f1292d7b Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.524378 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.534338 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.545165 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.556098 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.572389 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] 
unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: W1126 13:23:57.585714 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fa56d8f_ad6a_4761_ad93_58a109b0a9a3.slice/crio-1659ff47a28f53355478b23d66dc31f0de1628d2a685a954220599019663989c WatchSource:0}: Error finding container 1659ff47a28f53355478b23d66dc31f0de1628d2a685a954220599019663989c: Status 404 returned error can't find the container with id 1659ff47a28f53355478b23d66dc31f0de1628d2a685a954220599019663989c Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.585887 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.598142 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.634902 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.660677 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.686709 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.686907 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:23:58.686890048 +0000 UTC m=+22.322715130 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.689524 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.710582 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.795140 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.795726 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.795763 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:57 crc kubenswrapper[4695]: I1126 13:23:57.795787 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.795474 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796010 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:58.795992671 +0000 UTC m=+22.431817753 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796093 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796123 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796151 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796177 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:58.796169886 +0000 UTC m=+22.431994968 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796234 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796257 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:58.796251109 +0000 UTC m=+22.432076191 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.795937 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796283 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796316 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:57 crc kubenswrapper[4695]: E1126 13:23:57.796335 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:23:58.796329301 +0000 UTC m=+22.432154383 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.293088 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.293156 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"2777f7b1a2cda1ef5fe52b3189394c68e8d21768e69302c47786eb4cccb69f3a"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.294222 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pslgh" event={"ID":"b0bd1ae7-27db-479a-9f8e-256980eef3be","Type":"ContainerStarted","Data":"a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.294263 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pslgh" event={"ID":"b0bd1ae7-27db-479a-9f8e-256980eef3be","Type":"ContainerStarted","Data":"d11e548e9418d00d463c0c1b6397ff78209ed806008e26acf752ecd6f1292d7b"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.295377 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.295422 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.295433 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"aaedb3f3a557a65ec1a3296949db8135e0a1b20124ff63325396c88317f385fa"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.296243 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3" exitCode=0 Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.296301 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.296320 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"1659ff47a28f53355478b23d66dc31f0de1628d2a685a954220599019663989c"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.298452 4695 generic.go:334] "Generic (PLEG): container finished" podID="c4272f55-b840-43d1-bae3-5f3fa57b1ec6" containerID="a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f" exitCode=0 Nov 26 13:23:58 crc 
kubenswrapper[4695]: I1126 13:23:58.298516 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerDied","Data":"a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.298555 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerStarted","Data":"aecb48e4ea724f5ccebf307ad6dbd6118df0307128cdc79afe6344098cc2e25e"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.299465 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ddc0e32618b823796862928cd09bd7d70b193fa131b9ed57e4f7799fbf56a150"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.301177 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.301221 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.301237 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"a5e5f78d9ec1d02cefdaad7962abd4d5c9a882e9614c511b7c8daf4dc5f7ef5f"} Nov 26 13:23:58 crc 
kubenswrapper[4695]: I1126 13:23:58.302651 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.305191 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.305552 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.306556 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] 
unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.306783 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerStarted","Data":"92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.306810 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerStarted","Data":"cc24faad7655ab772c152d70e9109017043d224d53efcebcc9b8f89df1b56087"} Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.319237 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.330032 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.339670 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b
6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.349622 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.360973 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.376555 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.391936 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.407213 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.424508 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.433237 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.444924 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.457881 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.481291 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.494469 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.508624 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.524737 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.539652 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.553606 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.565478 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.577196 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.587234 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.600312 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.613918 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.644661 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.677643 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:58Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.703581 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.704516 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:24:00.704477974 +0000 UTC m=+24.340303066 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.807122 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.807184 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.807224 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:58 crc kubenswrapper[4695]: I1126 13:23:58.807250 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807324 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807360 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807368 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807385 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807430 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:00.807410212 +0000 UTC m=+24.443235294 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807430 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807450 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:00.807441463 +0000 UTC m=+24.443266545 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807447 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807481 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807493 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807494 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:00.807484074 +0000 UTC m=+24.443309156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:23:58 crc kubenswrapper[4695]: E1126 13:23:58.807571 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:00.807551536 +0000 UTC m=+24.443376618 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.161659 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:23:59 crc kubenswrapper[4695]: E1126 13:23:59.162001 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.162290 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:23:59 crc kubenswrapper[4695]: E1126 13:23:59.162379 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.162428 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:23:59 crc kubenswrapper[4695]: E1126 13:23:59.162469 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.165667 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.166228 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.167077 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.167808 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.168479 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.169025 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.170676 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.171216 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.172399 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.172900 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.173867 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.174591 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.175093 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.175966 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.176551 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.177480 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.178031 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.178465 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.179693 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.180259 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.181517 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.182598 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.183183 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.184522 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.185045 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.186496 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.187508 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.189400 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.190040 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.190810 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.191294 4695 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.191408 4695 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.192749 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.193294 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.193795 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.194917 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.195582 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.196103 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.196793 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.198336 4695 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.199063 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.199795 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.200536 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.201843 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.202445 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.203143 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.203827 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.204816 4695 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.212554 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.213447 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.214165 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.215491 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.216000 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.313561 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3"} Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.313608 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" 
event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969"} Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.313623 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d"} Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.313635 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b"} Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.313645 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d"} Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.315961 4695 generic.go:334] "Generic (PLEG): container finished" podID="c4272f55-b840-43d1-bae3-5f3fa57b1ec6" containerID="f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92" exitCode=0 Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.316048 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerDied","Data":"f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92"} Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.333645 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.345988 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.371263 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.392670 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.405675 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.423731 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.442492 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.456825 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.469152 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.482581 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.498271 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.510757 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:23:59 crc kubenswrapper[4695]: I1126 13:23:59.525307 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:23:59Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.327299 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16"} Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.329249 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9"} Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.332375 4695 generic.go:334] "Generic (PLEG): container finished" podID="c4272f55-b840-43d1-bae3-5f3fa57b1ec6" containerID="036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213" exitCode=0 Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.332415 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerDied","Data":"036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213"} Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.358202 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.383580 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.403851 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.419188 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.440581 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.464117 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062c
ba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.480739 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.497005 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.517533 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.534538 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.552300 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.569849 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.592781 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.607038 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.622705 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.636235 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.651477 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.669864 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 
13:24:00.680363 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.697474 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.708548 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.724196 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.724408 4695 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:24:04.724377695 +0000 UTC m=+28.360202777 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.725921 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.738744 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.754101 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.769729 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.783530 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:00Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.825022 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.825077 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.825103 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:00 crc kubenswrapper[4695]: I1126 13:24:00.825127 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825191 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825214 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825235 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825249 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825255 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:04.825233247 +0000 UTC m=+28.461058329 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825283 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:04.825273828 +0000 UTC m=+28.461098910 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825314 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825404 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825440 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:04.825410272 +0000 UTC m=+28.461235444 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825452 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825466 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:00 crc kubenswrapper[4695]: E1126 13:24:00.825534 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:04.825506966 +0000 UTC m=+28.461332048 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.161750 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.161804 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.161831 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:01 crc kubenswrapper[4695]: E1126 13:24:01.161935 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:01 crc kubenswrapper[4695]: E1126 13:24:01.162078 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:01 crc kubenswrapper[4695]: E1126 13:24:01.162331 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.240892 4695 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.343828 4695 generic.go:334] "Generic (PLEG): container finished" podID="c4272f55-b840-43d1-bae3-5f3fa57b1ec6" containerID="c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7" exitCode=0 Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.344676 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerDied","Data":"c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7"} Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.366862 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.383176 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.409863 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.426671 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.442692 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.461293 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.496088 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.521263 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.543155 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.558250 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.573373 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.591623 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:01 crc kubenswrapper[4695]: I1126 13:24:01.610673 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062c
ba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:01Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.358543 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46"} Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.362698 4695 generic.go:334] "Generic (PLEG): container finished" podID="c4272f55-b840-43d1-bae3-5f3fa57b1ec6" containerID="f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f" exitCode=0 Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.362749 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerDied","Data":"f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f"} Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.380260 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.402264 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.419402 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.431520 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.451446 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.474319 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.493152 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.509220 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.523583 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.541283 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.555048 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.572320 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.585476 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.656763 4695 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.659166 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.659208 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.659221 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.659374 4695 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.669192 4695 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.669590 4695 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.670851 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.670896 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.670909 4695 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.670927 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.670941 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:02 crc kubenswrapper[4695]: E1126 13:24:02.689921 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.692977 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.693020 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.693034 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.693056 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.693071 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:02 crc kubenswrapper[4695]: E1126 13:24:02.707987 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.711176 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.711220 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.711235 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.711257 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.711272 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:02 crc kubenswrapper[4695]: E1126 13:24:02.727808 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.732340 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.732437 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.732452 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.732471 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.732484 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.751713 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.751840 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.751858 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.751882 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.751899 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:02Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:02 crc kubenswrapper[4695]: E1126 13:24:02.774386 4695 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.776409 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.776449 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.776458 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.776474 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.776483 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.879383 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.879492 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.879517 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.879591 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.879614 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.983226 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.983306 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.983334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.983399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:02 crc kubenswrapper[4695]: I1126 13:24:02.983423 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:02Z","lastTransitionTime":"2025-11-26T13:24:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.086849 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.086933 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.086958 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.086987 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.087007 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.135012 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-x9bgt"] Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.135711 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.138058 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.139217 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.139466 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.139782 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.159297 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.162456 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:03 crc kubenswrapper[4695]: E1126 13:24:03.162763 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.163057 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.163277 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:03 crc kubenswrapper[4695]: E1126 13:24:03.163336 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:03 crc kubenswrapper[4695]: E1126 13:24:03.163461 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.182574 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.189749 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.189776 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.189787 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.189803 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.189814 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.196840 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.219608 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.236516 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.248278 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36bda2fb-93f6-4855-8099-a24645fa17e2-host\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.248334 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/36bda2fb-93f6-4855-8099-a24645fa17e2-serviceca\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.248401 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8b4\" (UniqueName: 
\"kubernetes.io/projected/36bda2fb-93f6-4855-8099-a24645fa17e2-kube-api-access-mw8b4\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.251845 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.267841 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.283457 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.292856 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.292900 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.292910 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.292926 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.292948 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.299294 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.315672 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.333279 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.349210 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.349464 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36bda2fb-93f6-4855-8099-a24645fa17e2-host\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.349509 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/36bda2fb-93f6-4855-8099-a24645fa17e2-serviceca\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.349550 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw8b4\" (UniqueName: \"kubernetes.io/projected/36bda2fb-93f6-4855-8099-a24645fa17e2-kube-api-access-mw8b4\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.349617 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/36bda2fb-93f6-4855-8099-a24645fa17e2-host\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.350555 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/36bda2fb-93f6-4855-8099-a24645fa17e2-serviceca\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.365997 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.373658 4695 generic.go:334] "Generic (PLEG): container finished" podID="c4272f55-b840-43d1-bae3-5f3fa57b1ec6" containerID="39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93" exitCode=0 Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.373722 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerDied","Data":"39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.376262 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw8b4\" (UniqueName: \"kubernetes.io/projected/36bda2fb-93f6-4855-8099-a24645fa17e2-kube-api-access-mw8b4\") pod \"node-ca-x9bgt\" (UID: \"36bda2fb-93f6-4855-8099-a24645fa17e2\") " pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.381414 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.394977 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.396466 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.396499 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.396510 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.396526 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.396537 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.411309 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.424676 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.436548 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.449511 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.450292 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-x9bgt" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.466157 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: W1126 13:24:03.470220 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36bda2fb_93f6_4855_8099_a24645fa17e2.slice/crio-61afb1069210b4e65db6f06eb0ab2ba4575a47882549e9e867ef107d3997d1af WatchSource:0}: Error finding container 61afb1069210b4e65db6f06eb0ab2ba4575a47882549e9e867ef107d3997d1af: Status 404 returned error can't find the container with id 61afb1069210b4e65db6f06eb0ab2ba4575a47882549e9e867ef107d3997d1af Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.485924 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.499962 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.499998 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.500010 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.500027 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.500041 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.506292 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.528839 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e
2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
1-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.546637 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.565951 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.583590 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.595978 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.602297 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.602355 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.602367 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc 
kubenswrapper[4695]: I1126 13:24:03.602384 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.602395 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.614385 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:03Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.640141 4695 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.704520 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.704558 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.704568 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.704582 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.704591 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.806940 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.806975 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.806985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.807000 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.807009 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.909202 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.909249 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.909261 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.909278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:03 crc kubenswrapper[4695]: I1126 13:24:03.909292 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:03Z","lastTransitionTime":"2025-11-26T13:24:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.011173 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.011212 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.011224 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.011240 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.011252 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.114481 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.114546 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.114570 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.114600 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.114620 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.216874 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.216919 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.216935 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.216955 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.216970 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.319947 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.319989 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.319999 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.320015 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.320026 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.384491 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" event={"ID":"c4272f55-b840-43d1-bae3-5f3fa57b1ec6","Type":"ContainerStarted","Data":"ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.386244 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-x9bgt" event={"ID":"36bda2fb-93f6-4855-8099-a24645fa17e2","Type":"ContainerStarted","Data":"3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.386331 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-x9bgt" event={"ID":"36bda2fb-93f6-4855-8099-a24645fa17e2","Type":"ContainerStarted","Data":"61afb1069210b4e65db6f06eb0ab2ba4575a47882549e9e867ef107d3997d1af"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.391974 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.392391 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.411059 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.422479 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.422538 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.422552 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.422574 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.422590 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.424569 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.427404 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.443268 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.453221 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.468313 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.482304 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.494824 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.525030 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.525086 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.525098 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc 
kubenswrapper[4695]: I1126 13:24:04.525116 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.525129 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.526250 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.544461 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.560869 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.577459 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.590306 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.604037 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.621009 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062c
ba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.627455 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.627504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.627516 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.627538 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.627551 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.636410 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00
399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.650618 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.662623 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.685935 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.701498 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.713361 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.728795 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.729869 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.729910 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.729919 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.729935 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.729944 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.742680 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.754303 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.766962 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.767157 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:24:12.76713194 +0000 UTC m=+36.402957022 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.767420 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.780783 4695 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.799889 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.813436 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.832365 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.832406 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.832414 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.832428 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.832439 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.833295 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.867661 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.867708 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.867746 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.867782 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867869 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867901 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867908 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867967 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:12.86794198 +0000 UTC m=+36.503767062 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867978 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867987 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:12.867979872 +0000 UTC m=+36.503804954 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867992 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.868031 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-26 13:24:12.868021843 +0000 UTC m=+36.503846925 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.867908 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.868059 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.868069 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:04 crc kubenswrapper[4695]: E1126 13:24:04.868091 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:12.868085465 +0000 UTC m=+36.503910547 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.934896 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.934954 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.934965 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.934985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:04 crc kubenswrapper[4695]: I1126 13:24:04.934998 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:04Z","lastTransitionTime":"2025-11-26T13:24:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.037706 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.037764 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.037786 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.037807 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.037823 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.140629 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.140670 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.140681 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.140696 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.140708 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.162221 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.162298 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.162454 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:05 crc kubenswrapper[4695]: E1126 13:24:05.162441 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:05 crc kubenswrapper[4695]: E1126 13:24:05.162504 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:05 crc kubenswrapper[4695]: E1126 13:24:05.162626 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.243274 4695 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.243329 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.243492 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.243513 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.243528 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.243537 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.345310 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.345405 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.345417 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.345432 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.345441 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.394964 4695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.395560 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.416821 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.429552 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.442151 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.448300 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.448371 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.448385 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.448401 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.448412 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.455513 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.476292 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.519899 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: 
I1126 13:24:05.543751 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.550639 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.550700 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.550712 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.550728 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.550746 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.557647 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00
399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.570670 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.582922 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.593377 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.607376 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.616678 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.627227 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.638363 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062c
ba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:05Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.652958 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.652991 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.653000 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.653023 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.653033 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.755119 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.755188 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.755209 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.755236 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.755255 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.858673 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.858720 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.858732 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.858750 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.858762 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.961392 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.961449 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.961464 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.961482 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:05 crc kubenswrapper[4695]: I1126 13:24:05.961494 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:05Z","lastTransitionTime":"2025-11-26T13:24:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.063922 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.063957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.063967 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.063980 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.063990 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.165827 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.165881 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.165892 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.165913 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.165924 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.268023 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.268060 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.268074 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.268090 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.268101 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.370399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.370466 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.370481 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.370504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.370520 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.398650 4695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.473710 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.473759 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.473767 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.473784 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.473797 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.577687 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.577737 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.577756 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.577781 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.577802 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.681619 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.681672 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.681686 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.681706 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.681718 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.784888 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.784952 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.784966 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.784990 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.785006 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.887088 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.887122 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.887130 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.887150 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.887159 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.990253 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.990315 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.990332 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.990389 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:06 crc kubenswrapper[4695]: I1126 13:24:06.990407 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:06Z","lastTransitionTime":"2025-11-26T13:24:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.092782 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.092866 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.092896 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.092922 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.092939 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.161644 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.161771 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.161883 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:07 crc kubenswrapper[4695]: E1126 13:24:07.161896 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:07 crc kubenswrapper[4695]: E1126 13:24:07.162085 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:07 crc kubenswrapper[4695]: E1126 13:24:07.162273 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.181171 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.195576 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.195623 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.195636 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.195659 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.195672 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.201449 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.227224 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.241466 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.265333 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},
{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a
46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.281523 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.294728 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.297495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.297531 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.297541 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.297558 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.297569 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.309426 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.327780 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.345913 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.362487 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.377118 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.393106 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.400293 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.400370 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.400382 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.400398 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.400412 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.404117 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/0.log" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.408864 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893" exitCode=1 Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.408970 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.409990 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.410118 4695 scope.go:117] "RemoveContainer" containerID="aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893" Nov 26 13:24:07 
crc kubenswrapper[4695]: I1126 13:24:07.427927 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.444142 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.461371 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.488544 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:06Z\\\",\\\"message\\\":\\\"controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1126 13:24:06.726508 6015 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:06.728319 6015 handler.go:190] 
Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:24:06.728385 6015 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1126 13:24:06.728431 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:24:06.728455 6015 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:06.728464 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:24:06.728486 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:24:06.728508 6015 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:24:06.728527 6015 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:24:06.728528 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:24:06.728565 6015 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:06.728568 6015 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:24:06.728636 6015 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:06.728637 6015 factory.go:656] Stopping watch factory\\\\nI1126 13:24:06.728662 6015 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.503425 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.503790 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.503808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.503827 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.503840 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.507179 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.521634 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.533070 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.544055 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.561069 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.575313 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062c
ba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.590847 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.602886 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.606609 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.606639 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.606648 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 
13:24:07.606663 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.606672 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.628013 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba
2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.638323 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.710678 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.710728 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.710745 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.710765 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.710780 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.813855 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.813921 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.813931 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.813978 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.813990 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.916845 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.916896 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.916909 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.916927 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:07 crc kubenswrapper[4695]: I1126 13:24:07.916940 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:07Z","lastTransitionTime":"2025-11-26T13:24:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.020213 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.020262 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.020276 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.020296 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.020311 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.123277 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.123323 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.123335 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.123381 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.123396 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.226673 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.226714 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.226723 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.226763 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.226773 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.329078 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.329138 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.329149 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.329165 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.329177 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.413770 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/0.log" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.416440 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.416570 4695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.431051 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.431085 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.431097 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.431111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.431124 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.431271 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00
399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.443885 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.457655 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.476902 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:06Z\\\",\\\"message\\\":\\\"controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1126 13:24:06.726508 6015 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:06.728319 6015 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:24:06.728385 6015 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI1126 13:24:06.728431 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:24:06.728455 6015 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:06.728464 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:24:06.728486 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:24:06.728508 6015 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:24:06.728527 6015 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:24:06.728528 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:24:06.728565 6015 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:06.728568 6015 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:24:06.728636 6015 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:06.728637 6015 factory.go:656] Stopping watch factory\\\\nI1126 13:24:06.728662 6015 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.492956 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.512173 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.532334 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.533912 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.533972 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.533985 4695 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.534003 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.534016 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.544910 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2
149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.556192 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.567878 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.586019 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.598068 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.613138 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.622490 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:08Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.636323 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.636371 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.636389 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.636409 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.636421 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.738975 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.739009 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.739017 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.739032 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.739040 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.842230 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.842269 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.842278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.842292 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.842302 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.944894 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.944969 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.944992 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.945022 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:08 crc kubenswrapper[4695]: I1126 13:24:08.945042 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:08Z","lastTransitionTime":"2025-11-26T13:24:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.048375 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.048432 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.048453 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.048492 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.048512 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.151647 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.151709 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.151728 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.151756 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.151774 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.162167 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.162270 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:09 crc kubenswrapper[4695]: E1126 13:24:09.162321 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:09 crc kubenswrapper[4695]: E1126 13:24:09.162500 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.162553 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:09 crc kubenswrapper[4695]: E1126 13:24:09.162664 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.254255 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.254298 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.254312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.254329 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.254341 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.356917 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.356974 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.356997 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.357030 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.357052 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.422916 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/1.log" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.424104 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/0.log" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.428385 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18" exitCode=1 Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.428451 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.428531 4695 scope.go:117] "RemoveContainer" containerID="aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.429567 4695 scope.go:117] "RemoveContainer" containerID="c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18" Nov 26 13:24:09 crc kubenswrapper[4695]: E1126 13:24:09.429974 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.449620 4695 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.460116 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.460166 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.460183 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.460207 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.460225 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.467812 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.487584 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.494418 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg"] Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.494904 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.497455 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.497657 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.505283 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445
c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.513138 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd28c\" (UniqueName: \"kubernetes.io/projected/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-kube-api-access-bd28c\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.513214 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.513248 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.513287 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.526616 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.541993 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.558251 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.563805 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.563863 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.563881 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.563906 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.563924 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.574647 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.596153 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.608991 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.613947 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bd28c\" (UniqueName: \"kubernetes.io/projected/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-kube-api-access-bd28c\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.613983 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.614000 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.614022 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.614708 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-env-overrides\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.614824 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.621739 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.633237 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:06Z\\\",\\\"message\\\":\\\"controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1126 13:24:06.726508 6015 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:06.728319 6015 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:24:06.728385 6015 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI1126 13:24:06.728431 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:24:06.728455 6015 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:06.728464 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:24:06.728486 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:24:06.728508 6015 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:24:06.728527 6015 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:24:06.728528 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:24:06.728565 6015 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:06.728568 6015 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:24:06.728636 6015 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:06.728637 6015 factory.go:656] Stopping watch factory\\\\nI1126 13:24:06.728662 6015 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:08Z\\\",\\\"message\\\":\\\"ostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-x9bgt openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1126 13:24:08.482962 6136 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 13:24:08.482991 6136 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"h
ost-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.634459 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bd28c\" (UniqueName: \"kubernetes.io/projected/ffee5f31-90c6-4596-9a07-9c3aa1725cb3-kube-api-access-bd28c\") pod \"ovnkube-control-plane-749d76644c-wwcjg\" (UID: \"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.647512 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"ima
ge\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.662937 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.666593 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.666741 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.666842 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.666941 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.667051 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.674710 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.688962 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.704161 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.718245 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.738506 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aaea7ef027050e56ee89804bc6596bc074c276937e9efa1e4153387f3614f893\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:06Z\\\",\\\"message\\\":\\\"controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1126 13:24:06.726508 6015 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:06.728319 6015 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1126 13:24:06.728385 6015 handler.go:190] Sending *v1.Pod event handler 6 
for removal\\\\nI1126 13:24:06.728431 6015 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1126 13:24:06.728455 6015 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:06.728464 6015 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1126 13:24:06.728486 6015 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1126 13:24:06.728508 6015 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1126 13:24:06.728527 6015 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1126 13:24:06.728528 6015 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1126 13:24:06.728565 6015 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:06.728568 6015 handler.go:208] Removed *v1.Node event handler 2\\\\nI1126 13:24:06.728636 6015 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:06.728637 6015 factory.go:656] Stopping watch factory\\\\nI1126 13:24:06.728662 6015 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:08Z\\\",\\\"message\\\":\\\"ostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-x9bgt openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1126 13:24:08.482962 6136 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 13:24:08.482991 6136 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"h
ost-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.753773 4695 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.770893 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.770942 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.770959 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.770982 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.770999 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.774818 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.788050 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.809677 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.815418 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.823546 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: W1126 13:24:09.836417 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffee5f31_90c6_4596_9a07_9c3aa1725cb3.slice/crio-7bc2729e0340b172ca16d45f6922e6a64443e5f45776e366c0e4b1ab32906870 WatchSource:0}: Error finding container 7bc2729e0340b172ca16d45f6922e6a64443e5f45776e366c0e4b1ab32906870: Status 404 returned error can't find the container with id 7bc2729e0340b172ca16d45f6922e6a64443e5f45776e366c0e4b1ab32906870 Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.837843 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\
\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.855651 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.870030 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.874494 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.874567 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.874593 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 
13:24:09.874623 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.874645 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.890582 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba
2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.903220 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.919972 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:09Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.977227 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.977288 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.977307 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.977334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:09 crc kubenswrapper[4695]: I1126 13:24:09.977421 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:09Z","lastTransitionTime":"2025-11-26T13:24:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.081126 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.081169 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.081180 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.081196 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.081206 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.184167 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.184223 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.184241 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.184267 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.184285 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.287661 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.287698 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.287708 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.287725 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.287737 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.389723 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.389758 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.389767 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.389780 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.389790 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.434875 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/1.log" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.440305 4695 scope.go:117] "RemoveContainer" containerID="c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18" Nov 26 13:24:10 crc kubenswrapper[4695]: E1126 13:24:10.440643 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.440847 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" event={"ID":"ffee5f31-90c6-4596-9a07-9c3aa1725cb3","Type":"ContainerStarted","Data":"769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.440874 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" event={"ID":"ffee5f31-90c6-4596-9a07-9c3aa1725cb3","Type":"ContainerStarted","Data":"680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.440883 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" event={"ID":"ffee5f31-90c6-4596-9a07-9c3aa1725cb3","Type":"ContainerStarted","Data":"7bc2729e0340b172ca16d45f6922e6a64443e5f45776e366c0e4b1ab32906870"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.455999 4695 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.474077 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.491628 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.493745 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.493798 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.493816 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc 
kubenswrapper[4695]: I1126 13:24:10.493840 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.493856 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.507814 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.519820 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.536279 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.556717 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.569220 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.584574 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.597059 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.597138 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.597165 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.597196 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.597220 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.600911 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.612526 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-l9n9h"] Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.613260 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:10 crc kubenswrapper[4695]: E1126 13:24:10.613396 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.618309 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.620803 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.621263 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.621423 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q95hj\" (UniqueName: \"kubernetes.io/projected/755825f0-d565-4a02-8a54-8f9be77991d6-kube-api-access-q95hj\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.635337 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.647535 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.670608 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:08Z\\\",\\\"message\\\":\\\"ostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-x9bgt openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1126 13:24:08.482962 6136 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 13:24:08.482991 6136 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.686286 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.699247 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.700450 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.700515 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.700532 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.700556 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.700572 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.714979 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.722433 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.722574 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q95hj\" (UniqueName: \"kubernetes.io/projected/755825f0-d565-4a02-8a54-8f9be77991d6-kube-api-access-q95hj\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:10 crc kubenswrapper[4695]: E1126 13:24:10.722600 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:10 crc kubenswrapper[4695]: E1126 13:24:10.722663 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:11.222643425 +0000 UTC m=+34.858468517 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.731163 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.741847 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q95hj\" (UniqueName: \"kubernetes.io/projected/755825f0-d565-4a02-8a54-8f9be77991d6-kube-api-access-q95hj\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.747243 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.761498 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.775464 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.791533 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.803268 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.803320 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.803334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.803369 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.803384 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.804613 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.819331 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c82
22358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.835099 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" 
len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582577
1aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.848776 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.861376 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.875147 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.903404 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:08Z\\\",\\\"message\\\":\\\"ostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-x9bgt openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1126 13:24:08.482962 6136 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 13:24:08.482991 6136 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.906250 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.906310 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.906334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.906396 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.906425 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:10Z","lastTransitionTime":"2025-11-26T13:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:10 crc kubenswrapper[4695]: I1126 13:24:10.919801 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:10 crc 
kubenswrapper[4695]: I1126 13:24:10.939013 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:10Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.008841 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.009186 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.009446 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.009598 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.009732 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.112676 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.113080 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.113286 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.113523 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.113656 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.161903 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.161903 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:11 crc kubenswrapper[4695]: E1126 13:24:11.162046 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:11 crc kubenswrapper[4695]: E1126 13:24:11.162116 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.161903 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:11 crc kubenswrapper[4695]: E1126 13:24:11.162192 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.217284 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.217319 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.217328 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.217367 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.217378 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.228445 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:11 crc kubenswrapper[4695]: E1126 13:24:11.228738 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:11 crc kubenswrapper[4695]: E1126 13:24:11.228844 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:12.228812246 +0000 UTC m=+35.864637368 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.320537 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.320605 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.320629 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.320657 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.320678 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.424159 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.424232 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.424255 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.424283 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.424306 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.527669 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.527720 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.527730 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.527747 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.527758 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.630802 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.630856 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.630874 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.630900 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.630916 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.734281 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.734396 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.734423 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.734447 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.734475 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.840409 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.841053 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.841116 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.841145 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.841163 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.943059 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.943094 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.943102 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.943115 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:11 crc kubenswrapper[4695]: I1126 13:24:11.943124 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:11Z","lastTransitionTime":"2025-11-26T13:24:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.045866 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.045957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.045977 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.046678 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.046764 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.149859 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.149929 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.149949 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.149974 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.149992 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.161445 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.161632 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.238889 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.239103 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.239239 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:14.239208545 +0000 UTC m=+37.875033667 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.253526 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.253600 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.253622 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.253653 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.253679 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.357391 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.357465 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.357488 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.357518 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.357539 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.460099 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.460198 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.460217 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.460271 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.460290 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.563535 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.563587 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.563605 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.563628 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.563645 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.666383 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.666457 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.666475 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.666500 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.666522 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.768653 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.768706 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.768723 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.768748 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.768768 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.844760 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.844914 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 13:24:28.844882254 +0000 UTC m=+52.480707376 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.871822 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.871885 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.871902 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.871926 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.871943 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.945878 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.945936 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.945970 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.946016 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946156 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946170 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946203 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946221 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946284 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:28.946260563 +0000 UTC m=+52.582085685 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946222 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946424 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946180 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946502 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946465 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:28.946436899 +0000 UTC m=+52.582262011 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946574 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:28.946556903 +0000 UTC m=+52.582382025 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:12 crc kubenswrapper[4695]: E1126 13:24:12.946596 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:28.946584663 +0000 UTC m=+52.582409785 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.974188 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.974271 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.974293 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.974323 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:12 crc kubenswrapper[4695]: I1126 13:24:12.974375 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:12Z","lastTransitionTime":"2025-11-26T13:24:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.078044 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.078121 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.078148 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.078179 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.078203 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.122694 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.122734 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.122745 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.122760 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.122771 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.140528 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.146181 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.146263 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.146284 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.146305 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.146367 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.160019 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.162274 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.162296 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.162301 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.162451 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.162517 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.162586 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.164320 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.164425 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.164448 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.164488 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.164506 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.181958 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.186281 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.186330 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.186367 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.186386 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.186397 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.205198 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.209534 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.209602 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.210007 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.210051 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.210068 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.232824 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:13Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:13 crc kubenswrapper[4695]: E1126 13:24:13.232976 4695 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.234952 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.234982 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.234991 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.235004 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc 
kubenswrapper[4695]: I1126 13:24:13.235015 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.337981 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.338037 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.338055 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.338078 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.338095 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.440626 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.440963 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.441048 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.441144 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.441217 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.543588 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.543661 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.543684 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.543717 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.543739 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.646427 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.646479 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.646491 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.646509 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.646520 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.749337 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.749400 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.749411 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.749428 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.749438 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.852531 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.852591 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.852607 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.852631 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.852647 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.955953 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.956025 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.956047 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.956076 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:13 crc kubenswrapper[4695]: I1126 13:24:13.956100 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:13Z","lastTransitionTime":"2025-11-26T13:24:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.058436 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.058468 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.058477 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.058491 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.058504 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.160727 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.160824 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.160837 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.160854 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.160865 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.161659 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:14 crc kubenswrapper[4695]: E1126 13:24:14.161843 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.258131 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:14 crc kubenswrapper[4695]: E1126 13:24:14.258306 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:14 crc kubenswrapper[4695]: E1126 13:24:14.258398 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:18.258378442 +0000 UTC m=+41.894203524 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.265615 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.265714 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.265736 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.265764 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.265836 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.370018 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.370057 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.370073 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.370091 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.370102 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.472832 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.472870 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.472880 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.472894 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.472904 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.575621 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.575670 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.575685 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.575704 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.575720 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.677795 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.677871 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.677894 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.677925 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.677948 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.780873 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.780930 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.780947 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.780970 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.780986 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.883703 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.883756 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.883774 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.883800 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.883816 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.986122 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.986179 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.986195 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.986219 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:14 crc kubenswrapper[4695]: I1126 13:24:14.986240 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:14Z","lastTransitionTime":"2025-11-26T13:24:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.088287 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.088399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.088417 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.088441 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.088457 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.162171 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.162249 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:15 crc kubenswrapper[4695]: E1126 13:24:15.162380 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:15 crc kubenswrapper[4695]: E1126 13:24:15.162459 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.162548 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:15 crc kubenswrapper[4695]: E1126 13:24:15.162612 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.191441 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.191523 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.191542 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.191567 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.191584 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.294379 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.294450 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.294469 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.294493 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.294511 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.396895 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.396966 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.396984 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.397009 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.397027 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.499882 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.499933 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.499948 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.499969 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.499984 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.602398 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.602448 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.602458 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.602472 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.602481 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.705422 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.705470 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.705482 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.705499 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.705511 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.808271 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.808315 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.808326 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.808375 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.808399 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.910842 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.910911 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.910934 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.910962 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:15 crc kubenswrapper[4695]: I1126 13:24:15.910984 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:15Z","lastTransitionTime":"2025-11-26T13:24:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.013306 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.013415 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.013446 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.013473 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.013492 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.116153 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.116181 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.116190 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.116202 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.116210 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.162087 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:16 crc kubenswrapper[4695]: E1126 13:24:16.162307 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.219130 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.219195 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.219212 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.219236 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.219254 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.321822 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.321909 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.321927 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.321952 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.321969 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.424738 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.424799 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.424817 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.424846 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.424863 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.527908 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.527971 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.527987 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.528011 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.528027 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.630900 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.630944 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.630958 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.630973 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.630982 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.734098 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.734158 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.734176 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.734198 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.734215 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.836243 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.836278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.836288 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.836304 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.836316 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.938731 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.938794 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.938809 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.938834 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:16 crc kubenswrapper[4695]: I1126 13:24:16.938850 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:16Z","lastTransitionTime":"2025-11-26T13:24:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.042021 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.042068 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.042079 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.042093 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.042104 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.144472 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.144565 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.144624 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.144651 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.144667 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.162088 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.162117 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:17 crc kubenswrapper[4695]: E1126 13:24:17.162270 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.162314 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:17 crc kubenswrapper[4695]: E1126 13:24:17.162466 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:17 crc kubenswrapper[4695]: E1126 13:24:17.162544 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.181631 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.198257 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"
startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.212550 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.233789 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 
cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-2
6T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.247144 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.247239 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.247250 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.247267 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.247277 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.249621 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.269895 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.283642 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.313527 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:08Z\\\",\\\"message\\\":\\\"ostics/network-check-target-xd92c 
openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-x9bgt openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1126 13:24:08.482962 6136 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 13:24:08.482991 6136 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.327860 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.343105 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.349607 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.349665 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.349679 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 
13:24:17.349700 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.349714 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.361816 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.375026 4695 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acces
s-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.390835 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.408436 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.423338 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.433732 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:17Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.451618 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.451653 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.451663 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.451678 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.451690 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.555336 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.555650 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.555660 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.555675 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.555686 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.658294 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.658382 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.658400 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.658424 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.658443 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.761434 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.761508 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.761530 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.761559 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.761580 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.864268 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.864342 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.864393 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.864422 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.864440 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.967524 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.967593 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.967615 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.967642 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:17 crc kubenswrapper[4695]: I1126 13:24:17.967665 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:17Z","lastTransitionTime":"2025-11-26T13:24:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.070030 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.070093 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.070111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.070136 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.070155 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.161249 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:18 crc kubenswrapper[4695]: E1126 13:24:18.161438 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.172457 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.172511 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.172523 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.172568 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.172583 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.275040 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.275094 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.275111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.275134 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.275150 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.377290 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.377533 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.377686 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.377703 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.377771 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.377789 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: E1126 13:24:18.377839 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:18 crc kubenswrapper[4695]: E1126 13:24:18.377928 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:26.37790829 +0000 UTC m=+50.013733372 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.480469 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.480512 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.480524 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.480563 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.480575 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.583435 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.583494 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.583507 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.583533 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.583548 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.686703 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.686802 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.686823 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.686847 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.686865 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.789497 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.789578 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.789606 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.789635 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.789655 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.892748 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.892856 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.892896 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.892932 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.892956 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.995899 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.995956 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.995972 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.995998 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:18 crc kubenswrapper[4695]: I1126 13:24:18.996015 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:18Z","lastTransitionTime":"2025-11-26T13:24:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.099537 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.099622 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.099644 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.099677 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.099700 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.161253 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.161458 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.161665 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:19 crc kubenswrapper[4695]: E1126 13:24:19.161644 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:19 crc kubenswrapper[4695]: E1126 13:24:19.161793 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:19 crc kubenswrapper[4695]: E1126 13:24:19.161967 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.202985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.203047 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.203065 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.203082 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.203094 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.306155 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.306201 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.306209 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.306224 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.306233 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.408764 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.408817 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.408836 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.408858 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.408876 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.511731 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.511780 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.511796 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.511819 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.511836 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.615285 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.615320 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.615340 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.615376 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.615386 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.718090 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.718157 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.718170 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.718185 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.718194 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.820810 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.820869 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.820886 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.820909 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.820928 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.924019 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.924088 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.924107 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.924130 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:19 crc kubenswrapper[4695]: I1126 13:24:19.924147 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:19Z","lastTransitionTime":"2025-11-26T13:24:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.026169 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.026231 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.026264 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.026290 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.026310 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.129293 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.129339 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.129374 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.129394 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.129404 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.162090 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:20 crc kubenswrapper[4695]: E1126 13:24:20.162240 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.231503 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.231581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.231613 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.231642 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.231663 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.334585 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.334646 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.334663 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.334686 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.334704 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.437034 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.437084 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.437103 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.437118 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.437127 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.540043 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.540097 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.540111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.540131 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.540959 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.644014 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.644099 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.644125 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.644162 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.644185 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.747080 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.747143 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.747161 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.747188 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.747207 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.850073 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.850126 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.850138 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.850154 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.850166 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.953398 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.953461 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.953482 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.953531 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:20 crc kubenswrapper[4695]: I1126 13:24:20.953558 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:20Z","lastTransitionTime":"2025-11-26T13:24:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.057012 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.057064 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.057078 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.057096 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.057110 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.159585 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.159640 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.159656 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.159679 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.159696 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.162157 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.162210 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.162214 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:21 crc kubenswrapper[4695]: E1126 13:24:21.162386 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:21 crc kubenswrapper[4695]: E1126 13:24:21.162527 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:21 crc kubenswrapper[4695]: E1126 13:24:21.162665 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.262873 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.263040 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.263063 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.263088 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.263144 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.366644 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.366695 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.366743 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.366768 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.366784 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.469901 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.470227 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.470605 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.470910 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.471195 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.574495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.574796 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.574986 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.575129 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.575250 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.678321 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.678411 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.678456 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.678482 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.678499 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.781506 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.781572 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.781586 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.781606 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.781621 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.884793 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.884865 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.884887 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.884914 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.884931 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.988097 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.988533 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.988680 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.988818 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:21 crc kubenswrapper[4695]: I1126 13:24:21.988955 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:21Z","lastTransitionTime":"2025-11-26T13:24:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.091149 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.091198 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.091217 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.091241 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.091258 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.161846 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:22 crc kubenswrapper[4695]: E1126 13:24:22.162024 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.194339 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.194423 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.194441 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.194467 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.194486 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.296367 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.296396 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.296405 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.296418 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.296429 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.399241 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.399335 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.399370 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.399392 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.399406 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.502539 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.502596 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.502611 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.502634 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.502652 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.606100 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.606151 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.606163 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.606185 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.606200 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.709693 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.709780 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.709804 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.709836 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.709860 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.813210 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.813284 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.813302 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.813322 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.813337 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.916779 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.916830 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.916843 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.916864 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:22 crc kubenswrapper[4695]: I1126 13:24:22.916877 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:22Z","lastTransitionTime":"2025-11-26T13:24:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.020389 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.020445 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.020458 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.020474 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.020485 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.124755 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.124807 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.124824 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.124845 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.124861 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.161678 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.161750 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.161696 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.161918 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.162059 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.162217 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.227136 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.227220 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.227245 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.227280 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.227298 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.258037 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.258099 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.258111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.258130 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.258142 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.277920 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.282710 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.282768 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.282785 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.282808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.282825 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.301290 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.306948 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.307044 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.307064 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.307096 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.307116 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.323704 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.329087 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.329211 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.329230 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.329256 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.329272 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.346169 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.352488 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.352566 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.352581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.352671 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.352690 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.378327 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:23Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:23Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:23 crc kubenswrapper[4695]: E1126 13:24:23.378710 4695 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.381109 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.381175 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.381198 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.381227 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.381252 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.484636 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.484683 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.484699 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.484735 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.484753 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.587010 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.587069 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.587089 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.587116 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.587137 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.689794 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.689855 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.689878 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.689905 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.689926 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.735727 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.737162 4695 scope.go:117] "RemoveContainer" containerID="c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.793492 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.793548 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.793564 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.793587 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.793606 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.895993 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.896030 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.896038 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.896053 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.896064 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.998366 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.998410 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.998419 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.998437 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:23 crc kubenswrapper[4695]: I1126 13:24:23.998447 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:23Z","lastTransitionTime":"2025-11-26T13:24:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.101868 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.101911 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.101924 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.101938 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.101947 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.161767 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:24 crc kubenswrapper[4695]: E1126 13:24:24.161894 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.203743 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.203791 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.203803 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.203817 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.203830 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.306646 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.306689 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.306703 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.306726 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.306737 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.409436 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.409491 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.409504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.409521 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.409533 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.491780 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/1.log" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.495616 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.496208 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.511531 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55
b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.512613 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.512727 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.512754 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.512786 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.512811 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.534897 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:
23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.561019 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a78
82be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator 
for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.575570 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.589288 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.600736 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.612466 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.615247 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.615274 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.615288 4695 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.615304 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.615316 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.624768 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 
26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.638917 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543
f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.657197 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.671209 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.702617 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:08Z\\\",\\\"message\\\":\\\"ostics/network-check-target-xd92c openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-x9bgt openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1126 13:24:08.482962 6136 transact.go:42] Configuring OVN: [{Op:update 
Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 13:24:08.482991 6136 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: 
fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.716151 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.717675 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.717701 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.717710 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.717722 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.717731 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.731911 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.748449 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.760191 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:24Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.820632 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.820661 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.820706 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.820721 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.820733 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.923525 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.923580 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.923598 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.923620 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:24 crc kubenswrapper[4695]: I1126 13:24:24.923636 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:24Z","lastTransitionTime":"2025-11-26T13:24:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.027120 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.027182 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.027199 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.027221 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.027238 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.130974 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.131031 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.131048 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.131073 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.131089 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.161886 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.161950 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:25 crc kubenswrapper[4695]: E1126 13:24:25.162087 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.162099 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:25 crc kubenswrapper[4695]: E1126 13:24:25.162220 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:25 crc kubenswrapper[4695]: E1126 13:24:25.162390 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.234093 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.234160 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.234184 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.234212 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.234235 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.337600 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.337661 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.337685 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.337714 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.337736 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.439833 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.439881 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.439892 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.439908 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.439921 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.501225 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/2.log" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.502148 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/1.log" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.505970 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046" exitCode=1 Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.506021 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.506084 4695 scope.go:117] "RemoveContainer" containerID="c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.507811 4695 scope.go:117] "RemoveContainer" containerID="d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046" Nov 26 13:24:25 crc kubenswrapper[4695]: E1126 13:24:25.508220 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.530015 4695 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.542883 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.542942 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.542967 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.543005 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.543029 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.554170 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.574951 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.591664 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.608405 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.623772 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062c
ba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.638038 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672
d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.646375 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.646443 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.646467 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.646499 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.646523 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.659748 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.674958 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.696119 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.711035 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.740909 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8fb140c113bfd948df05e28fc98c47c17bfd87d129d8565b34abd4a7a60df18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:08Z\\\",\\\"message\\\":\\\"ostics/network-check-target-xd92c 
openshift-network-node-identity/network-node-identity-vrzqb openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-image-registry/node-ca-x9bgt openshift-kube-controller-manager/kube-controller-manager-crc]\\\\nI1126 13:24:08.482962 6136 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler/scheduler]} name:Service_openshift-kube-scheduler/scheduler_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.169:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {39432221-5995-412b-967b-35e1a9405ec7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1126 13:24:08.482991 6136 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: fai\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 
6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.749411 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.749462 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.749479 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.749506 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.749523 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.758179 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc 
kubenswrapper[4695]: I1126 13:24:25.774274 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.793891 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.808384 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:25Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.852288 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.852394 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.852421 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc 
kubenswrapper[4695]: I1126 13:24:25.852451 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.852474 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.955029 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.955085 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.955103 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.955127 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:25 crc kubenswrapper[4695]: I1126 13:24:25.955145 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:25Z","lastTransitionTime":"2025-11-26T13:24:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.058808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.058868 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.058885 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.058909 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.058926 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.161268 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:26 crc kubenswrapper[4695]: E1126 13:24:26.161502 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.162450 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.162512 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.162536 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.162562 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.162586 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.266456 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.266545 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.266564 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.266587 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.266604 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.369594 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.369647 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.369663 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.369688 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.369711 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.452230 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:26 crc kubenswrapper[4695]: E1126 13:24:26.452461 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:26 crc kubenswrapper[4695]: E1126 13:24:26.452581 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:24:42.452548485 +0000 UTC m=+66.088373607 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.471577 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.471634 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.471645 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.471661 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.471672 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.512239 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/2.log" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.517482 4695 scope.go:117] "RemoveContainer" containerID="d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046" Nov 26 13:24:26 crc kubenswrapper[4695]: E1126 13:24:26.517807 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.532315 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.544713 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.558733 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.574646 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.574707 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.574727 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.574751 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.574769 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.578341 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.593705 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.613028 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.631239 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.651469 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.668408 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.682168 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.682234 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.682254 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.682279 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.682304 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.691100 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc 
kubenswrapper[4695]: I1126 13:24:26.710289 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747be
f9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.724855 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad
6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.747289 4695 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.760713 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.783788 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.785067 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.785125 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.785137 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.785155 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.785167 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.800903 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:26Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:26 crc 
kubenswrapper[4695]: I1126 13:24:26.888113 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.888167 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.888183 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.888205 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.888222 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.990336 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.990399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.990409 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.990426 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:26 crc kubenswrapper[4695]: I1126 13:24:26.990437 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:26Z","lastTransitionTime":"2025-11-26T13:24:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.092989 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.093045 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.093061 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.093087 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.093105 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.161401 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.161515 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:27 crc kubenswrapper[4695]: E1126 13:24:27.161605 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.161736 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:27 crc kubenswrapper[4695]: E1126 13:24:27.161822 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:27 crc kubenswrapper[4695]: E1126 13:24:27.161887 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.182858 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.195031 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.195074 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.195085 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.195101 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.195114 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.199271 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.220227 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.234098 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.249985 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.270757 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.289696 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.297650 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.297693 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.297704 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.297720 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.297734 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.305740 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.329893 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.344246 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.358091 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.376420 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.392185 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.400566 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.400594 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.400605 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.400630 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.400643 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.418276 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.439641 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13
:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.451730 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host
\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:27Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.503727 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.503780 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.503798 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.503822 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.503841 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.607043 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.607150 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.607175 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.607244 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.607269 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.711492 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.711565 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.711587 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.711617 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.711638 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.814755 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.814831 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.814853 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.814882 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.814903 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.918448 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.918492 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.918504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.918522 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:27 crc kubenswrapper[4695]: I1126 13:24:27.918532 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:27Z","lastTransitionTime":"2025-11-26T13:24:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.021563 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.021627 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.021650 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.021679 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.021703 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.125830 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.125928 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.125952 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.125977 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.125997 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.162052 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.162284 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.228655 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.228747 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.228773 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.228807 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.228829 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.331798 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.331858 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.331871 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.331889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.331921 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.435455 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.435518 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.435533 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.435555 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.435570 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.538511 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.538560 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.538572 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.538589 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.538599 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.642642 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.642714 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.642733 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.642759 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.642783 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.745480 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.745549 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.745590 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.745617 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.745636 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.848234 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.848273 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.848282 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.848297 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.848306 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.879429 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.879637 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 13:25:00.879599852 +0000 UTC m=+84.515424954 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.951312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.951396 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.951413 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.951435 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.951452 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:28Z","lastTransitionTime":"2025-11-26T13:24:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.980518 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.980648 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.980698 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.980716 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: I1126 13:24:28.980734 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.980754 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.980773 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.980823 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.980873 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:25:00.980820643 +0000 UTC m=+84.616645755 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.980924 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-26 13:25:00.980900795 +0000 UTC m=+84.616725917 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.980917 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.981074 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:25:00.981044349 +0000 UTC m=+84.616869501 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.981083 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.981146 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.981173 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:28 crc kubenswrapper[4695]: E1126 13:24:28.981296 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:25:00.981257876 +0000 UTC m=+84.617083008 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.054812 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.054851 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.054860 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.054873 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.054882 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.158633 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.158679 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.158691 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.158706 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.158715 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.162163 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.162217 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.162183 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:29 crc kubenswrapper[4695]: E1126 13:24:29.162325 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:29 crc kubenswrapper[4695]: E1126 13:24:29.162422 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:29 crc kubenswrapper[4695]: E1126 13:24:29.162656 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.262126 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.262172 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.262184 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.262200 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.262215 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.364863 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.364905 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.364916 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.364929 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.364945 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.473785 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.473851 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.473870 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.473900 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.473924 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.576884 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.576958 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.576981 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.577008 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.577029 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.679711 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.679760 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.679775 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.679793 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.679806 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.783744 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.783819 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.783837 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.783863 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.783880 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.886228 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.886301 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.886324 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.886393 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.886421 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.989279 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.989399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.989424 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.989451 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:29 crc kubenswrapper[4695]: I1126 13:24:29.989475 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:29Z","lastTransitionTime":"2025-11-26T13:24:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.092251 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.092312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.092331 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.092382 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.092401 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.161553 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:30 crc kubenswrapper[4695]: E1126 13:24:30.161789 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.195654 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.195734 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.195759 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.195792 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.195816 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.299540 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.299625 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.299651 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.299683 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.299710 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.402807 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.402873 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.402890 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.402914 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.402932 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.505021 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.505060 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.505077 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.505095 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.505107 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.607859 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.607909 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.607925 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.607942 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.607953 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.710501 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.710568 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.710591 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.710619 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.710641 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.813625 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.813836 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.813871 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.813902 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.813938 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.917370 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.917432 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.917443 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.917461 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:30 crc kubenswrapper[4695]: I1126 13:24:30.917475 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:30Z","lastTransitionTime":"2025-11-26T13:24:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.020683 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.020745 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.020762 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.020790 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.020808 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.123399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.123467 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.123489 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.123513 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.123530 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.161610 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.161692 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.161755 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:31 crc kubenswrapper[4695]: E1126 13:24:31.161788 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:31 crc kubenswrapper[4695]: E1126 13:24:31.161879 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:31 crc kubenswrapper[4695]: E1126 13:24:31.161926 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.226339 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.226441 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.226454 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.226473 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.226486 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.329292 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.329438 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.329457 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.329483 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.329500 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.393152 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.408127 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.410533 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\
"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.432594 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.432663 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.432686 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.432716 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.432739 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.434417 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z 
is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.455401 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.472528 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:
56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.485466 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.503263 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.513754 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.534836 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.534900 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.534922 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.534952 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.534974 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.544414 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.559514 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.578848 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.598067 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.612064 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.631458 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.638384 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.638449 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.638464 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.638489 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.638505 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.644281 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.657000 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.670468 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:31Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.741851 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.741911 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.741928 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.741951 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.741970 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.845504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.845581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.845602 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.845631 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.845657 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.949203 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.949272 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.949290 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.949393 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:31 crc kubenswrapper[4695]: I1126 13:24:31.949415 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:31Z","lastTransitionTime":"2025-11-26T13:24:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.052158 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.052196 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.052206 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.052220 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.052231 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.154509 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.154559 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.154575 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.154599 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.154616 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.161628 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:32 crc kubenswrapper[4695]: E1126 13:24:32.161810 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.256913 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.256963 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.256975 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.256993 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.257005 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.359067 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.359100 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.359111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.359128 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.359138 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.461739 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.461784 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.461793 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.461808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.461819 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.564278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.564609 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.564715 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.564888 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.564979 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.668173 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.668220 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.668234 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.668253 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.668269 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.771725 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.771790 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.771808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.771832 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.771853 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.874068 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.874131 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.874150 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.874174 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.874191 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.977224 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.977282 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.977298 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.977316 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:32 crc kubenswrapper[4695]: I1126 13:24:32.977331 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:32Z","lastTransitionTime":"2025-11-26T13:24:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.079905 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.080243 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.080528 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.080716 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.080845 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.161530 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.161591 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.161803 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.161848 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.162017 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.162197 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.183431 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.183495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.183506 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.183522 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.183536 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.286495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.286555 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.286567 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.286586 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.286599 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.389468 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.389558 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.389575 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.389642 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.389661 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.492020 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.492072 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.492088 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.492112 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.492128 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.594589 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.594640 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.594653 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.594672 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.594691 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.603995 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.604079 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.604098 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.604128 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.604150 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.622334 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.627026 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.627087 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.627103 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.627129 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.627146 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.645190 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.651021 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.651059 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.651072 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.651089 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.651102 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.664330 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.668213 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.668243 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.668253 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.668269 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.668280 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.683827 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.687754 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.687793 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.687805 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.687822 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.687834 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.705571 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:33Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:33 crc kubenswrapper[4695]: E1126 13:24:33.706306 4695 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.708135 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.708182 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.708211 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.708228 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.708239 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.811056 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.811112 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.811127 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.811148 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.811163 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.914543 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.914612 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.914637 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.914669 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:33 crc kubenswrapper[4695]: I1126 13:24:33.914689 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:33Z","lastTransitionTime":"2025-11-26T13:24:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.018005 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.018066 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.018083 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.018108 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.018126 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.121506 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.121583 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.121595 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.121617 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.121634 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.161310 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:34 crc kubenswrapper[4695]: E1126 13:24:34.161606 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.224537 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.224581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.224592 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.224608 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.224619 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.327309 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.327367 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.327378 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.327394 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.327403 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.430495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.430557 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.430575 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.430599 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.430616 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.533165 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.533212 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.533224 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.533241 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.533253 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.636313 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.636433 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.636464 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.636494 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.636511 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.740078 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.740136 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.740153 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.740173 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.740182 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.843299 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.843377 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.843389 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.843408 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.843420 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.946171 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.946218 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.946227 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.946244 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:34 crc kubenswrapper[4695]: I1126 13:24:34.946261 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:34Z","lastTransitionTime":"2025-11-26T13:24:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.051396 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.051493 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.051528 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.051566 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.051603 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.153788 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.154044 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.154201 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.154409 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.154576 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.162231 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:35 crc kubenswrapper[4695]: E1126 13:24:35.162327 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.162235 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:35 crc kubenswrapper[4695]: E1126 13:24:35.162407 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.162566 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:35 crc kubenswrapper[4695]: E1126 13:24:35.162748 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.257324 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.257406 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.257423 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.257447 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.257461 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.360433 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.360480 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.360495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.360515 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.360528 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.463403 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.463433 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.463444 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.463459 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.463469 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.566957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.567067 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.567087 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.567111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.567135 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.670707 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.670772 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.670791 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.670817 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.670836 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.773490 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.773542 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.773553 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.773570 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.773582 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.876585 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.876648 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.876668 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.876691 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.876707 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.978767 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.978845 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.978860 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.978884 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:35 crc kubenswrapper[4695]: I1126 13:24:35.978900 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:35Z","lastTransitionTime":"2025-11-26T13:24:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.082471 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.082547 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.082564 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.082589 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.082608 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.161220 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:36 crc kubenswrapper[4695]: E1126 13:24:36.161463 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.185757 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.185812 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.185828 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.185925 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.185944 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.289109 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.289148 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.289157 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.289174 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.289184 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.391837 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.391866 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.391875 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.391889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.391898 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.494456 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.494526 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.494549 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.494576 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.494594 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.597571 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.597619 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.597630 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.597650 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.597663 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.700877 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.700938 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.700961 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.700988 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.701012 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.804408 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.804472 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.804495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.804524 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.804546 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.907471 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.907531 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.907544 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.907560 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:36 crc kubenswrapper[4695]: I1126 13:24:36.907572 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:36Z","lastTransitionTime":"2025-11-26T13:24:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.010913 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.010976 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.010998 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.011026 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.011047 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.114991 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.115039 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.115053 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.115077 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.115091 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.161542 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.161607 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.161662 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:37 crc kubenswrapper[4695]: E1126 13:24:37.161732 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:37 crc kubenswrapper[4695]: E1126 13:24:37.162166 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:37 crc kubenswrapper[4695]: E1126 13:24:37.162463 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.179034 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-
plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.200488 4695 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.221316 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.222178 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.222455 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.222663 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.222832 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.226466 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver
-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.253608 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.275970 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.285319 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.307066 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.320016 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.325559 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.325587 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.325597 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.325618 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.325633 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.332634 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00
399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.346160 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.359080 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.376163 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.394549 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.411403 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.429030 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.429247 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.429307 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.429326 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.429378 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.429400 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.447518 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.470052 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:37Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.532584 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.532660 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.532684 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.532717 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.532741 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.636434 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.636882 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.636901 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.636927 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.636945 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.739876 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.739968 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.739986 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.740011 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.740028 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.843153 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.843199 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.843211 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.843226 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.843239 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.946172 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.946254 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.946281 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.946312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:37 crc kubenswrapper[4695]: I1126 13:24:37.946335 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:37Z","lastTransitionTime":"2025-11-26T13:24:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.048964 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.049018 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.049027 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.049042 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.049053 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.151765 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.151818 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.151833 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.151852 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.151865 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.161225 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:38 crc kubenswrapper[4695]: E1126 13:24:38.161418 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.254726 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.254781 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.254792 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.254810 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.255115 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.357471 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.357516 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.357526 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.357542 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.357551 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.459181 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.459224 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.459234 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.459251 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.459266 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.562390 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.562475 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.562499 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.562533 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.562559 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.664790 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.664857 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.664877 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.664901 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.664921 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.766929 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.766972 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.766985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.767011 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.767024 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.869308 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.869414 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.869436 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.869460 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.869477 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.972877 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.972960 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.972985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.973011 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:38 crc kubenswrapper[4695]: I1126 13:24:38.973027 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:38Z","lastTransitionTime":"2025-11-26T13:24:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.076343 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.076435 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.076453 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.076479 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.076504 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.162067 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.162136 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:39 crc kubenswrapper[4695]: E1126 13:24:39.162216 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.162145 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:39 crc kubenswrapper[4695]: E1126 13:24:39.162307 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:39 crc kubenswrapper[4695]: E1126 13:24:39.162466 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.179840 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.179877 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.179889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.179904 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.179916 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.282478 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.282519 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.282528 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.282542 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.282551 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.385620 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.385696 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.385714 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.385746 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.385767 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.487601 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.487636 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.487645 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.487658 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.487669 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.590196 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.590262 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.590288 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.590317 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.590391 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.693105 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.693148 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.693156 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.693169 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.693180 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.795722 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.795772 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.795787 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.795806 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.795821 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.903229 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.903315 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.903392 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.903429 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:39 crc kubenswrapper[4695]: I1126 13:24:39.903455 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:39Z","lastTransitionTime":"2025-11-26T13:24:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.006190 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.006225 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.006238 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.006253 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.006265 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.108941 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.108988 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.109004 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.109025 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.109041 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.162031 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:40 crc kubenswrapper[4695]: E1126 13:24:40.162192 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.163915 4695 scope.go:117] "RemoveContainer" containerID="d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046" Nov 26 13:24:40 crc kubenswrapper[4695]: E1126 13:24:40.165155 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.211741 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.211783 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.211900 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.211925 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.211940 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.314233 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.314279 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.314295 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.314316 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.314332 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.416392 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.417163 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.417309 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.417479 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.417620 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.520725 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.520778 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.520795 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.520819 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.520836 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.623588 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.623640 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.623657 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.623678 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.623696 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.726776 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.726837 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.726856 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.726880 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.726900 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.829921 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.829975 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.829995 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.830020 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.830038 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.932876 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.932922 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.932932 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.932951 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:40 crc kubenswrapper[4695]: I1126 13:24:40.932964 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:40Z","lastTransitionTime":"2025-11-26T13:24:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.035544 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.035631 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.035649 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.035673 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.035690 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.139087 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.139137 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.139152 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.139171 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.139185 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.161539 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.161584 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:41 crc kubenswrapper[4695]: E1126 13:24:41.161698 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.161764 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:41 crc kubenswrapper[4695]: E1126 13:24:41.161941 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:41 crc kubenswrapper[4695]: E1126 13:24:41.162257 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.241840 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.241902 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.241921 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.241944 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.241962 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.344986 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.345051 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.345074 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.345103 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.345123 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.447939 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.447983 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.447992 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.448008 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.448017 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.550591 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.550684 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.550703 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.550726 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.550741 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.653655 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.653696 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.653706 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.653725 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.653736 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.756522 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.756584 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.756604 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.756628 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.756643 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.860095 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.860131 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.860143 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.860159 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.860170 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.962577 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.962643 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.962659 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.962683 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:41 crc kubenswrapper[4695]: I1126 13:24:41.962705 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:41Z","lastTransitionTime":"2025-11-26T13:24:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.065595 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.065656 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.065670 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.065692 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.065709 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.161477 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:42 crc kubenswrapper[4695]: E1126 13:24:42.161717 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.168073 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.168158 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.168175 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.168201 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.168219 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.270446 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.270495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.270509 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.270525 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.270535 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.373904 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.373949 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.373961 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.373983 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.374002 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.476431 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.476475 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.476483 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.476498 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.476508 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.537684 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:42 crc kubenswrapper[4695]: E1126 13:24:42.537921 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:42 crc kubenswrapper[4695]: E1126 13:24:42.538052 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:25:14.53802165 +0000 UTC m=+98.173846932 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.578210 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.578246 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.578257 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.578286 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.578296 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.680469 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.680521 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.680531 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.680545 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.680554 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.783695 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.783736 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.783746 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.783765 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.783775 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.886780 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.886829 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.886838 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.886854 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.886882 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.990084 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.990147 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.990159 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.990175 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:42 crc kubenswrapper[4695]: I1126 13:24:42.990185 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:42Z","lastTransitionTime":"2025-11-26T13:24:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.092182 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.092225 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.092233 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.092250 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.092259 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.161942 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.162005 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.162121 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.162130 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.162316 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.162604 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.194524 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.194573 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.194586 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.194602 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.194611 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.297047 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.297093 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.297106 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.297122 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.297133 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.400127 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.400186 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.400202 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.400225 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.400241 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.503135 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.503204 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.503221 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.503245 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.503255 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.605212 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.605281 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.605292 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.605310 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.605320 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.708799 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.708835 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.708849 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.708865 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.708875 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.811591 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.811651 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.811666 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.811684 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.811696 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.865180 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.865223 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.865234 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.865251 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.865262 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.878063 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.881627 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.881671 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.881681 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.881698 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.881706 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.892881 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.896520 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.896609 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.896628 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.896682 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.896699 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.914814 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.919506 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.919592 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.919609 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.919662 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.919797 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.933845 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.939302 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.939337 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.939365 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.939381 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.939391 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.952513 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:43Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:43 crc kubenswrapper[4695]: E1126 13:24:43.952625 4695 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.954187 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.954216 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.954227 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.954244 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:43 crc kubenswrapper[4695]: I1126 13:24:43.954255 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:43Z","lastTransitionTime":"2025-11-26T13:24:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.056537 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.056586 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.056598 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.056616 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.056628 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.158548 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.158591 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.158618 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.158637 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.158649 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.162182 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:44 crc kubenswrapper[4695]: E1126 13:24:44.162326 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.261021 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.261076 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.261093 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.261117 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.261134 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.363333 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.363391 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.363403 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.363419 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.363432 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.466269 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.466315 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.466326 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.466362 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.466374 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.568628 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.568694 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.568717 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.568747 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.568769 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.672121 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.672159 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.672168 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.672181 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.672192 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.774589 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.774628 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.774638 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.774653 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.774663 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.877102 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.877294 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.877307 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.877326 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.877336 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.980332 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.980400 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.980409 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.980426 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:44 crc kubenswrapper[4695]: I1126 13:24:44.980435 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:44Z","lastTransitionTime":"2025-11-26T13:24:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.083191 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.083237 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.083249 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.083264 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.083274 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.161674 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.161756 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.161860 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:45 crc kubenswrapper[4695]: E1126 13:24:45.162008 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:45 crc kubenswrapper[4695]: E1126 13:24:45.162147 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:45 crc kubenswrapper[4695]: E1126 13:24:45.162260 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.185696 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.185757 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.185772 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.185793 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.185810 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.288715 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.288787 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.288805 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.288829 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.288842 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.392170 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.392209 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.392218 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.392231 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.392239 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.495384 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.495432 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.495444 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.495461 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.495471 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.579881 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/0.log" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.579932 4695 generic.go:334] "Generic (PLEG): container finished" podID="133aab88-6958-4575-aefd-c4675266edd5" containerID="92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0" exitCode=1 Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.579958 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerDied","Data":"92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.580319 4695 scope.go:117] "RemoveContainer" containerID="92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.592543 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.597234 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.597283 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.597295 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.597315 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.597328 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.605278 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.617968 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.631483 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.643036 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.654451 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.670676 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.682625 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.695456 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.702823 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.702866 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.702876 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.702889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.702903 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.711340 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed
8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.725644 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\"
,\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.740319 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.756678 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.771300 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.792595 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.805225 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.805718 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.805756 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.805767 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.805785 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.805801 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.817685 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00
399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:45Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.908196 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.908273 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.908297 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.908326 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:45 crc kubenswrapper[4695]: I1126 13:24:45.908383 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:45Z","lastTransitionTime":"2025-11-26T13:24:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.011281 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.011385 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.011398 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.011415 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.011427 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.113228 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.113378 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.113400 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.113427 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.113445 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.161936 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:46 crc kubenswrapper[4695]: E1126 13:24:46.162118 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.215480 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.215516 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.215525 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.215539 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.215549 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.318364 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.318404 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.318414 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.318431 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.318441 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.421532 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.421607 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.421620 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.421640 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.421653 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.524930 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.524989 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.525004 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.525028 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.525044 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.585541 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/0.log" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.585607 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerStarted","Data":"d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.598340 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.620822 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] 
Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.628134 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.628189 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.628201 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.628221 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.628234 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.638485 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed
8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.655597 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\"
,\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.669329 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.685184 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.698144 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.712616 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.729543 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.730234 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.730269 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.730278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.730323 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.730336 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.743481 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f
5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.757150 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.779620 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.791923 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.804007 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.814775 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.828225 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.831744 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.831784 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.831796 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.831812 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.831823 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.840091 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:46Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.934752 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.934797 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.934814 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.934837 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:46 crc kubenswrapper[4695]: I1126 13:24:46.934855 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:46Z","lastTransitionTime":"2025-11-26T13:24:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.037484 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.037550 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.037568 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.037596 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.037620 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.145118 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.145175 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.145188 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.145397 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.145415 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.161411 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:47 crc kubenswrapper[4695]: E1126 13:24:47.161648 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.161736 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.161806 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:47 crc kubenswrapper[4695]: E1126 13:24:47.161915 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:47 crc kubenswrapper[4695]: E1126 13:24:47.162040 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.178175 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.194453 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.210309 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.233984 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.247496 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.248258 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.248311 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.248330 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.248375 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.248393 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.262236 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.280925 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.292489 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.307748 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.326821 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d28d4d1bc4915a353a2
1e800360614d08363697aa850f7fdec0f1aa9ee324c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.342326 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.350786 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.350824 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.350833 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.350846 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.350859 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.358839 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\
\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276
e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.372872 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.388247 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.400096 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.411998 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.424842 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:47Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.454609 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.454649 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.454662 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.454684 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.454698 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.557225 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.557273 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.557284 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.557382 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.557405 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.660698 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.660731 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.660739 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.660753 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.660765 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.763252 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.763286 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.763295 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.763310 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.763320 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.865270 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.865325 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.865372 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.865405 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.865422 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.971997 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.972231 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.972252 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.972282 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:47 crc kubenswrapper[4695]: I1126 13:24:47.972302 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:47Z","lastTransitionTime":"2025-11-26T13:24:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.075428 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.075472 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.075481 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.075496 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.075506 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.161438 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:48 crc kubenswrapper[4695]: E1126 13:24:48.161606 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.177130 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.177168 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.177180 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.177195 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.177212 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.279946 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.280011 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.280033 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.280065 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.280087 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.382464 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.382541 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.382567 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.382596 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.382618 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.485121 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.485169 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.485186 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.485207 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.485225 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.587913 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.587965 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.587982 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.588018 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.588037 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.690808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.690844 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.690856 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.690870 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.690881 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.792776 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.792837 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.792852 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.792872 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.792886 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.895097 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.895150 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.895160 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.895173 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.895182 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.997301 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.997334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.997362 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.997380 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:48 crc kubenswrapper[4695]: I1126 13:24:48.997391 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:48Z","lastTransitionTime":"2025-11-26T13:24:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.099713 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.099754 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.099764 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.099777 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.099785 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.161514 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.161587 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.161689 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:24:49 crc kubenswrapper[4695]: E1126 13:24:49.162142 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 13:24:49 crc kubenswrapper[4695]: E1126 13:24:49.162241 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:24:49 crc kubenswrapper[4695]: E1126 13:24:49.162370 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.202164 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.202235 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.202248 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.202265 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.202277 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.304802 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.304865 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.304881 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.304906 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.304925 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.407498 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.407580 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.407603 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.407669 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.407693 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.509844 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.509911 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.509928 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.509971 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.509986 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.613587 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.613667 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.613679 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.613699 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.613712 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.715789 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.715833 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.715846 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.715861 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.715870 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.817734 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.817775 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.817785 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.817800 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.817808 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.920331 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.920452 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.920477 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.920506 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:49 crc kubenswrapper[4695]: I1126 13:24:49.920528 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:49Z","lastTransitionTime":"2025-11-26T13:24:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.023552 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.023594 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.023603 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.023618 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.023627 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.126340 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.126410 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.126423 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.126443 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.126459 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.161935 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h"
Nov 26 13:24:50 crc kubenswrapper[4695]: E1126 13:24:50.162088 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.229270 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.229296 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.229305 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.229321 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.229334 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.332627 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.332708 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.332726 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.332748 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.332766 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.435168 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.435220 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.435240 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.435269 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.435543 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.541768 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.541818 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.541829 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.541848 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.541860 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.644814 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.644886 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.644903 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.644923 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.644978 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.747789 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.747851 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.747867 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.747889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.747905 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.850429 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.850503 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.850518 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.850536 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.850547 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.953427 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.953475 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.953488 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.953507 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:50 crc kubenswrapper[4695]: I1126 13:24:50.953517 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:50Z","lastTransitionTime":"2025-11-26T13:24:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.056452 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.056520 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.056543 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.056574 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.056595 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.159181 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.159238 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.159254 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.159278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.159295 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.162417 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.162514 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 26 13:24:51 crc kubenswrapper[4695]: E1126 13:24:51.162669 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 26 13:24:51 crc kubenswrapper[4695]: E1126 13:24:51.162732 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.163275 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 26 13:24:51 crc kubenswrapper[4695]: E1126 13:24:51.163508 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.261785 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.261834 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.261847 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.261864 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.261876 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.364759 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.364813 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.364829 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.364891 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.364912 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.468417 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.468503 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.468530 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.468564 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.468587 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.571124 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.571191 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.571213 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.571241 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.571261 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.674191 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.674284 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.674301 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.674329 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.674390 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.776985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.777043 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.777054 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.777075 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.777086 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.880005 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.880083 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.880109 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.880137 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.880158 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.982922 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.982987 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.983008 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.983032 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:51 crc kubenswrapper[4695]: I1126 13:24:51.983051 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:51Z","lastTransitionTime":"2025-11-26T13:24:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.085533 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.085673 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.085689 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.085708 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.085721 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.161844 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:52 crc kubenswrapper[4695]: E1126 13:24:52.162061 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.191477 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.191564 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.191581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.191610 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.191623 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.294909 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.294981 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.294997 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.295016 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.295030 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.397869 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.397949 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.397973 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.398004 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.398049 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.500192 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.500235 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.500252 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.500277 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.500295 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.603141 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.603188 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.603197 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.603214 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.603225 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.706865 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.706908 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.706918 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.706934 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.706945 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.810526 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.810572 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.810583 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.810601 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.810613 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.913295 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.913381 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.913394 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.913437 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:52 crc kubenswrapper[4695]: I1126 13:24:52.913451 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:52Z","lastTransitionTime":"2025-11-26T13:24:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.016509 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.016586 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.016610 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.016643 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.016665 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.119201 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.119253 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.119264 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.119279 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.119290 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.161648 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:53 crc kubenswrapper[4695]: E1126 13:24:53.161799 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.161671 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.161648 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:53 crc kubenswrapper[4695]: E1126 13:24:53.161886 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:53 crc kubenswrapper[4695]: E1126 13:24:53.162101 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.222016 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.222056 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.222065 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.222079 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.222090 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.324312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.324393 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.324410 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.324432 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.324449 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.427430 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.427524 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.427548 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.427578 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.427601 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.530546 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.530600 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.530618 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.530641 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.530660 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.633844 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.633889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.633899 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.633913 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.633922 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.736257 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.736295 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.736305 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.736320 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.736331 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.838823 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.838859 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.838870 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.838886 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.838897 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.941631 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.941672 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.941683 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.941698 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:53 crc kubenswrapper[4695]: I1126 13:24:53.941709 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:53Z","lastTransitionTime":"2025-11-26T13:24:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.044861 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.044926 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.044946 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.044972 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.044989 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.069006 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.069069 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.069084 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.069106 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.069118 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: E1126 13:24:54.090899 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.096026 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.096096 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.096114 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.096138 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.096157 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.123090 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.123142 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.123159 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.123185 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.123203 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: E1126 13:24:54.144299 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.149309 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.149561 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.149710 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.149860 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.149998 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.161788 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:54 crc kubenswrapper[4695]: E1126 13:24:54.162578 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.163167 4695 scope.go:117] "RemoveContainer" containerID="d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046" Nov 26 13:24:54 crc kubenswrapper[4695]: E1126 13:24:54.172490 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.176853 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.177208 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.177680 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.177934 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.178152 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: E1126 13:24:54.202822 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: E1126 13:24:54.202938 4695 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.205088 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.205113 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.205121 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.205136 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.205145 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.309497 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.309558 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.309576 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.309600 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.309620 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.412058 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.412100 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.412111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.412132 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.412146 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.515785 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.515842 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.515859 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.515881 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.515898 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.612979 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/2.log" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.616756 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.617729 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.617845 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.617867 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.617892 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.617907 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.617918 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.639821 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.654707 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.667532 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.678217 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.689315 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.701232 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1a
a9ee324c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.710319 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.719709 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.719748 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.719697 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.719764 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.719938 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.719953 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.730693 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.742983 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\"
,\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.755122 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.770827 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.789738 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: 
I1126 13:24:54.815124 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.822629 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.822662 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.822674 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.822689 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.822700 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.824422 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc 
kubenswrapper[4695]: I1126 13:24:54.834770 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.844726 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.925029 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.925074 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.925085 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.925120 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:54 crc kubenswrapper[4695]: I1126 13:24:54.925132 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:54Z","lastTransitionTime":"2025-11-26T13:24:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.027669 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.027770 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.027795 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.027826 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.027853 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.130951 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.131012 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.131035 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.131064 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.131087 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.161685 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.161767 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:55 crc kubenswrapper[4695]: E1126 13:24:55.161873 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.161930 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:55 crc kubenswrapper[4695]: E1126 13:24:55.162159 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:55 crc kubenswrapper[4695]: E1126 13:24:55.162340 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.234334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.234419 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.234436 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.234460 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.234479 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.337551 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.337627 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.337653 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.337681 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.337703 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.440554 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.440640 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.440659 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.440690 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.440714 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.544205 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.544252 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.544262 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.544276 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.544286 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.622788 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/3.log" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.628995 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/2.log" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.633047 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5" exitCode=1 Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.633101 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.633177 4695 scope.go:117] "RemoveContainer" containerID="d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.634168 4695 scope.go:117] "RemoveContainer" containerID="fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5" Nov 26 13:24:55 crc kubenswrapper[4695]: E1126 13:24:55.634448 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.646600 4695 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.646649 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.646659 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.646675 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.646685 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.656775 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.676956 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.689425 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.704524 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.721237 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.736200 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\"
,\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.749049 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.749084 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.749094 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.749108 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.749120 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.749333 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00
399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.763487 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.775629 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.794971 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1a7f245521576b2d690144dd2f8b8279875ce3b004b3fa88501315755a74046\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:24Z\\\",\\\"message\\\":\\\" 6350 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612095 6350 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1126 13:24:24.612376 6350 
reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1126 13:24:24.612773 6350 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1126 13:24:24.612823 6350 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1126 13:24:24.612825 6350 handler.go:208] Removed *v1.Node event handler 7\\\\nI1126 13:24:24.612833 6350 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1126 13:24:24.612848 6350 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1126 13:24:24.612869 6350 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1126 13:24:24.612901 6350 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1126 13:24:24.612907 6350 factory.go:656] Stopping watch factory\\\\nI1126 13:24:24.612953 6350 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1126 13:24:24.612921 6350 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:55Z\\\",\\\"message\\\":\\\"ork-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1126 13:24:55.030884 6719 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1126 13:24:55.030881 6719 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler 
{0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z]\\\\nI1126 13:24:55.030896 6719 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{
\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.806223 4695 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc 
kubenswrapper[4695]: I1126 13:24:55.818318 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.827972 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.839581 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.850888 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.851187 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.851207 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.851219 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc 
kubenswrapper[4695]: I1126 13:24:55.851235 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.851245 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.861938 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.873977 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] 
Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:55Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.953926 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.953991 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.954012 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.954041 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:55 crc kubenswrapper[4695]: I1126 13:24:55.954062 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:55Z","lastTransitionTime":"2025-11-26T13:24:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.057460 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.057499 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.057510 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.057526 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.057539 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.160926 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.160983 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.161005 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.161034 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.161054 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.161486 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:56 crc kubenswrapper[4695]: E1126 13:24:56.161688 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.264095 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.264160 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.264178 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.264201 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.264217 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.367403 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.367447 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.367457 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.367472 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.367483 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.470442 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.470523 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.470550 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.470581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.470602 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.574168 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.574219 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.574236 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.574258 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.574275 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.638788 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/3.log" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.642341 4695 scope.go:117] "RemoveContainer" containerID="fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5" Nov 26 13:24:56 crc kubenswrapper[4695]: E1126 13:24:56.642523 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.658047 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.675213 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\"
,\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.676806 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.676847 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.676860 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.676876 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.676888 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.689468 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.705068 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.715969 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.727583 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.740768 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.753380 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.766328 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.779857 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.779920 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.779939 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc 
kubenswrapper[4695]: I1126 13:24:56.779964 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.779983 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.790878 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:55Z\\\",\\\"message\\\":\\\"ork-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1126 13:24:55.030884 6719 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1126 13:24:55.030881 6719 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to 
start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z]\\\\nI1126 13:24:55.030896 6719 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.803541 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.820585 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.835528 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.849742 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.862763 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.876603 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.882112 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.882152 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.882164 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.882181 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.882191 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.892621 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:56Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.985551 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.985999 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.986219 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.986402 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:56 crc kubenswrapper[4695]: I1126 13:24:56.986563 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:56Z","lastTransitionTime":"2025-11-26T13:24:56Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.090594 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.090682 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.090695 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.090717 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.090728 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.161601 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:57 crc kubenswrapper[4695]: E1126 13:24:57.161848 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.162143 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.162275 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:57 crc kubenswrapper[4695]: E1126 13:24:57.162369 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:57 crc kubenswrapper[4695]: E1126 13:24:57.162454 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.183620 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.194332 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.194558 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.194734 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.194873 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.195043 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.203023 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.217976 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.229892 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.247172 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.266824 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1a
a9ee324c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.281030 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3582
5771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.297912 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.297971 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.297986 
4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.298002 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.298016 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.300393 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatu
ses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.317452 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.334387 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.349386 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.363457 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.381045 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.400052 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.401110 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.401151 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.401165 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.401185 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.401197 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.412367 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.428620 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:55Z\\\",\\\"message\\\":\\\"ork-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1126 13:24:55.030884 6719 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1126 13:24:55.030881 6719 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to 
start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z]\\\\nI1126 13:24:55.030896 6719 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.446064 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:57Z is after 2025-08-24T17:21:41Z" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.503630 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.503723 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.503751 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.503784 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.503810 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.607939 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.609170 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.609201 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.609230 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.609253 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.713268 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.713330 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.713372 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.713399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.713418 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.816740 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.816799 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.816822 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.816878 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.816896 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.920521 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.920600 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.920625 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.920650 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:57 crc kubenswrapper[4695]: I1126 13:24:57.920668 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:57Z","lastTransitionTime":"2025-11-26T13:24:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.024213 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.024275 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.024292 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.024318 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.024338 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.126786 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.126852 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.126871 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.126904 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.126923 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.161547 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:24:58 crc kubenswrapper[4695]: E1126 13:24:58.161800 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.230377 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.230436 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.230449 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.230467 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.230481 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.332911 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.332967 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.332985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.333008 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.333026 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.436417 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.436479 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.436501 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.436531 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.436553 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.539494 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.539546 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.539563 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.539585 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.539601 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.641734 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.641775 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.641785 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.641799 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.641808 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.744186 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.744248 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.744271 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.744301 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.744322 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.847637 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.847687 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.847699 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.847717 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.847731 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.950805 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.950874 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.950909 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.950947 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:58 crc kubenswrapper[4695]: I1126 13:24:58.950971 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:58Z","lastTransitionTime":"2025-11-26T13:24:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.054102 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.054148 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.054157 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.054169 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.054179 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.156119 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.156166 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.156177 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.156192 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.156203 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.161693 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.161739 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:24:59 crc kubenswrapper[4695]: E1126 13:24:59.161829 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.161705 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:24:59 crc kubenswrapper[4695]: E1126 13:24:59.161919 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:24:59 crc kubenswrapper[4695]: E1126 13:24:59.162003 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.259144 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.259183 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.259192 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.259205 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.259214 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.362023 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.362081 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.362094 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.362111 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.362123 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.464733 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.464783 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.464795 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.464814 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.464826 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.567446 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.567498 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.567509 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.567529 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.567545 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.670789 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.670838 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.670855 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.670880 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.670897 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.774097 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.774151 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.774168 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.774192 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.774211 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.877056 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.877113 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.877130 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.877151 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.877167 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.979769 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.979824 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.979845 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.979871 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:24:59 crc kubenswrapper[4695]: I1126 13:24:59.979887 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:24:59Z","lastTransitionTime":"2025-11-26T13:24:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.082100 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.082152 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.082169 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.082190 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.082205 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.161454 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:00 crc kubenswrapper[4695]: E1126 13:25:00.161600 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.185535 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.185621 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.185648 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.185675 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.185694 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.289193 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.289303 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.289327 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.289395 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.289428 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.392873 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.392940 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.392963 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.392994 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.393019 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.495917 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.495993 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.496021 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.496049 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.496070 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.598468 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.598510 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.598522 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.598537 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.598548 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.701559 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.701623 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.701657 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.701689 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.701711 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.804845 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.804916 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.804942 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.804972 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.804992 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.907991 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.908039 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.908050 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.908066 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.908076 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:00Z","lastTransitionTime":"2025-11-26T13:25:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:00 crc kubenswrapper[4695]: I1126 13:25:00.945256 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:25:00 crc kubenswrapper[4695]: E1126 13:25:00.945740 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 13:26:04.945659685 +0000 UTC m=+148.581484777 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.012247 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.012421 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.012442 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.012469 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.012488 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.046384 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.046534 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.046593 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.046645 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046675 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046727 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046755 4695 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046763 4695 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046795 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046807 4695 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046835 4695 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.046964 4695 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:25:01 crc 
kubenswrapper[4695]: E1126 13:25:01.046868 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.046830309 +0000 UTC m=+148.682655431 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.047065 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.047040436 +0000 UTC m=+148.682865598 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.047111 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.047092437 +0000 UTC m=+148.682917549 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.047152 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.047136719 +0000 UTC m=+148.682961881 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.115802 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.115861 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.115970 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.116010 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.116032 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.161686 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.162091 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.162168 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.162537 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.162796 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:01 crc kubenswrapper[4695]: E1126 13:25:01.163064 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.179853 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.184148 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.218949 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.219011 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.219027 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.219051 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.219070 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.322568 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.322641 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.322658 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.322685 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.322702 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.425772 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.425829 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.425847 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.425868 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.425882 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.529297 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.529417 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.529437 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.529460 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.529476 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.632878 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.632930 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.632942 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.632957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.632969 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.735487 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.735537 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.735555 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.735574 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.735586 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.838278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.838383 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.838410 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.838437 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.838459 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.941510 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.941552 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.941564 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.941579 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:01 crc kubenswrapper[4695]: I1126 13:25:01.941591 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:01Z","lastTransitionTime":"2025-11-26T13:25:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.044648 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.044684 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.044692 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.044704 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.044729 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.147383 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.147669 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.147677 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.147691 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.147699 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.161705 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:02 crc kubenswrapper[4695]: E1126 13:25:02.161798 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.250561 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.250624 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.250644 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.250668 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.250690 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.353504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.353552 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.353563 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.353579 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.353591 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.456929 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.456959 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.456971 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.456993 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.457005 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.559256 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.559326 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.559343 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.559391 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.559402 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.662736 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.662796 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.662814 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.662837 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.662854 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.764999 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.765061 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.765084 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.765108 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.765124 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.867955 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.868021 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.868038 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.868060 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.868077 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.970859 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.970930 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.970958 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.970985 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:02 crc kubenswrapper[4695]: I1126 13:25:02.971007 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:02Z","lastTransitionTime":"2025-11-26T13:25:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.074197 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.074267 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.074290 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.074317 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.074340 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.161857 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:03 crc kubenswrapper[4695]: E1126 13:25:03.161999 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.161863 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.161857 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:03 crc kubenswrapper[4695]: E1126 13:25:03.162249 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:03 crc kubenswrapper[4695]: E1126 13:25:03.162579 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.176410 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.176461 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.176478 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.176498 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.176520 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.278608 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.278689 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.278727 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.278757 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.278776 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.381804 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.381852 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.381864 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.381879 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.381889 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.484801 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.484850 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.484864 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.484881 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.484894 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.586726 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.586808 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.586834 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.586859 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.586876 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.688988 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.689024 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.689035 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.689050 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.689061 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.791800 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.791849 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.791861 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.791878 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.791888 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.893865 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.893897 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.893905 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.893917 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.893925 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.996476 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.996529 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.996541 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.996556 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:03 crc kubenswrapper[4695]: I1126 13:25:03.996579 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:03Z","lastTransitionTime":"2025-11-26T13:25:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.099720 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.099777 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.099795 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.099818 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.099835 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.161611 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:04 crc kubenswrapper[4695]: E1126 13:25:04.161815 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.201915 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.201957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.201968 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.201982 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.201992 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.236406 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.236469 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.236487 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.236510 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.236528 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: E1126 13:25:04.257804 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.262397 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.262474 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.262496 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.262520 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.262539 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: E1126 13:25:04.277750 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.281334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.281416 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.281429 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.281445 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.281457 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: E1126 13:25:04.297511 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.301296 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.301369 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.301383 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.301403 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.301413 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: E1126 13:25:04.320500 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.324504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.324547 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.324559 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.324612 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.324626 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: E1126 13:25:04.338828 4695 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-26T13:25:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2a904109-f06a-4e5e-98fe-96acd68c2c44\\\",\\\"systemUUID\\\":\\\"38c50ac0-92c3-4f5b-bd42-96718c941574\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:04Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:04 crc kubenswrapper[4695]: E1126 13:25:04.339051 4695 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.340957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.341002 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.341011 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.341024 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.341032 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.444133 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.444185 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.444202 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.444225 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.444244 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.547573 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.547624 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.547636 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.547651 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.547663 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.650327 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.650376 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.650384 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.650399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.650408 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.752529 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.752594 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.752615 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.752638 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.752659 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.856059 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.856109 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.856120 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.856136 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.856146 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.959459 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.959528 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.959551 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.959580 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:04 crc kubenswrapper[4695]: I1126 13:25:04.959603 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:04Z","lastTransitionTime":"2025-11-26T13:25:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.062708 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.062765 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.062778 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.062798 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.062811 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.161509 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.161593 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.161536 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:05 crc kubenswrapper[4695]: E1126 13:25:05.161703 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:05 crc kubenswrapper[4695]: E1126 13:25:05.161794 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:05 crc kubenswrapper[4695]: E1126 13:25:05.161914 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.165960 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.166025 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.166099 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.166122 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.166139 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.269147 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.269202 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.269213 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.269228 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.269240 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.372635 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.372714 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.372740 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.372770 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.372795 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.475452 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.475495 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.475507 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.475523 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.475535 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.578225 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.578394 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.578429 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.578458 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.578479 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.680971 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.681014 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.681024 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.681038 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.681049 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.783984 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.784024 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.784036 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.784054 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.784067 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.886228 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.886266 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.886278 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.886294 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.886307 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.988625 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.988675 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.988686 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.988704 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:05 crc kubenswrapper[4695]: I1126 13:25:05.988714 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:05Z","lastTransitionTime":"2025-11-26T13:25:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.091233 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.091263 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.091272 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.091286 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.091295 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.162198 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:06 crc kubenswrapper[4695]: E1126 13:25:06.162333 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.193841 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.193877 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.193889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.193905 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.193916 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.296889 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.296965 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.296988 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.297019 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.297044 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.399640 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.399708 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.399727 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.399751 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.399770 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.502312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.502401 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.502421 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.502448 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.502467 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.605755 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.605802 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.605818 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.605840 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.605857 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.708627 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.708676 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.708693 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.708713 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.708730 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.811527 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.811609 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.811635 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.811670 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.811690 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.914454 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.914482 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.914490 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.914503 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:06 crc kubenswrapper[4695]: I1126 13:25:06.914512 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:06Z","lastTransitionTime":"2025-11-26T13:25:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.017090 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.017162 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.017187 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.017219 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.017241 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.119674 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.119723 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.119734 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.119751 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.119765 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.161442 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.161506 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:07 crc kubenswrapper[4695]: E1126 13:25:07.161643 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.161788 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:07 crc kubenswrapper[4695]: E1126 13:25:07.161886 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:07 crc kubenswrapper[4695]: E1126 13:25:07.162023 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.182749 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c98c71d6a85b36f1e84de47adca1321a18f11c40709688dbf0a17633bde32fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.203957 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-hgtpx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"133aab88-6958-4575-aefd-c4675266edd5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:44Z\\\",\\\"message\\\":\\\"2025-11-26T13:23:59+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25\\\\n2025-11-26T13:23:59+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ae90164d-e6f2-485b-b7ee-605192d5cd25 to /host/opt/cni/bin/\\\\n2025-11-26T13:23:59Z [verbose] multus-daemon started\\\\n2025-11-26T13:23:59Z [verbose] Readiness Indicator file check\\\\n2025-11-26T13:24:44Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{
\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hx2zx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-hgtpx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.221923 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7f1ec97-41ee-4e26-b9b0-8fa7d353a5f6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bb576dc1a2d359f23a81b918ba5996609d5302b93a29eaa38aaf8962feab5de6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1a238dd4fbf9adbaf353a6cc52f2ed8ed4fe8a7d81dd73db87a146e932854dc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://366cd857ee873827f186f8308cb1b7ae5b8f0afc6f80ba0bc13082c9cfd85a37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d71b259d74dfe9f252cb0ba7e2432409878d0da7f60cf9d9792a1105a593bbb9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.225240 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.225940 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.225957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.225978 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.225992 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.238894 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9294908a-fb2d-4b41-b754-46ae6e357e11\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"message\\\":\\\"W1126 13:23:40.300398 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1126 13:23:40.300713 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764163420 cert, and key in /tmp/serving-cert-2907851818/serving-signer.crt, /tmp/serving-cert-2907851818/serving-signer.key\\\\nI1126 13:23:40.823361 1 observer_polling.go:159] Starting file observer\\\\nW1126 13:23:50.827976 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1126 13:23:50.828166 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1126 13:23:50.831176 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2907851818/tls.crt::/tmp/serving-cert-2907851818/tls.key\\\\\\\"\\\\nI1126 13:23:56.312931 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1126 13:23:56.316507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1126 13:23:56.316534 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1126 13:23:56.316560 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1126 13:23:56.316567 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nF1126 13:23:56.324241 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.252005 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.267301 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4272f55-b840-43d1-bae3-5f3fa57b1ec6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ee2c689d5442f927a53146975decaefd4a7e60ee79f428765b56528d0b9ced2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4ae28be72dfac8e8ec0f21f8170c0580cd21f3e5e27735c25ce393a1809da6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6defc5a16c779ef8342d41b362d1ef159a6227c2f8865ea0c47349d1a994e92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://036455a98268a4454b4b359e24397085d0e7e2881e7d60cde2f839dffc1b9213\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c117a
4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c117a4e9f13f42b9a7f3bd93955184fbc3295c50dc5beeed723abc629ff82de7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f69ecdf74f885aa63869f94fcbc87c1683cb2d3f6b929e50c0b4376713a9cf7f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39fc96d3436c9656cbf49f8362eba2711c85691a5e9cb535a5388e04942acb93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:24:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pbhzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-r5n2z\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.279586 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-x9bgt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36bda2fb-93f6-4855-8099-a24645fa17e2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3f7c9f20237db9e5b935c7041ae8eb3302e204aab9af824cf011e53f11ca7736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-26T13:24:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mw8b4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-x9bgt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.292380 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffee5f31-90c6-4596-9a07-9c3aa1725cb3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://680591c8222358651f94dc2f672d90c5786115087154c87fa267c3c86e764e46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://769c42558d83099c2d42d201fd9e31da4408f
91a5db44c941997831894c3bdef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bd28c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-wwcjg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.304927 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"755825f0-d565-4a02-8a54-8f9be77991d6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:24:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q95hj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:24:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-l9n9h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc 
kubenswrapper[4695]: I1126 13:25:07.318509 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8052e47-426a-45b4-bb9c-f13cc8d44e1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c91627a542bd913053affc7fdfc48889b180b899de2436b3137417e3173c472\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://0aea7b67799276f736c3a10bb3ac12f5300ef604e0b3f1d61b99fdce84d93913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0aea7b67799276f736c3a10bb3ac12f5300ef604e0b3f1d61b99fdce84d93913\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.328679 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.328734 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.328751 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 
13:25:07.328776 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.328794 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.348842 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"06c7f72d-f905-490b-8b53-57fa9b894ace\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a612d4de026cb4d82be39949a238a35f286234b21b03e1a8400f7dfc23989da4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df1f2bf5949e83a2deebf90e08e56dc63d93b4a3521b246036627216abbc6a6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86f00cf859dd7770416e39cc1fb4dd07697572ab1bb23bcbf4fe605ee1286303\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08d46f528704409795ae69ca0fb5667276ae061e00ecfc74a3c5cf49187d0f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c48a1733d0598496d18edd8345db071466d28fa83e7bf4531080a9dbd10a605e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-
dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9529b606245f24a456aca8b461125946fdc44c82fad6576299ed9e1d2a28f425\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9529b606245f24a456aca8b461125946fdc44c82fad6576299ed9e1d2a28f425\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6a3c5fbe50fff4d819077adb5ec1382b264979f3b7c7fdaf80493babf77dd2ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6a3c5fbe50fff4d819077adb5ec1382b264979f3b7c7fdaf80493babf77dd2ce\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:39Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://8db312de9c19928748f17c7e6f713cc8c003cedb48ad4ac58730d5eb3b759a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ec
d6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8db312de9c19928748f17c7e6f713cc8c003cedb48ad4ac58730d5eb3b759a60\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.366493 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"05aa5d3e-e9c2-4293-9584-6926d8548d33\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de310daa446a90bfa474eb327aa05e0ef146da2fa7398b6e2ae773cc6c623ec3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e9c780a6aaeb50048c149d59a0d1407e376040accaccd87566d9da22cdcfe415\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af806ecbb72bc713362e7079f7215bfd9f295aa7c1b61632f7163198deaa104d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-11-26T13:23:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.383896 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://27818bb550b3677ee33ff27f6ca68cc5e3b21ab8deff2b090dac993f835776f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e01a25f5e2f208cff07a25ac1b610991cb58a6e2f34693e65b86b09dfd45545\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.399575 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"73cbd5f2-751e-49c2-b804-e81b9ca46cd4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82a69db3a4fe5dfcd048f52ea2bc781f8b52fcabc3e6b8cc00de6d0e4ca9e609\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb
9f56723093b4927c45f73d24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mqsvv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mmgd2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.429832 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:59Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-26T13:24:55Z\\\",\\\"message\\\":\\\"ork-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1126 13:24:55.030884 6719 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nF1126 13:24:55.030881 6719 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to 
start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:24:54Z is after 2025-08-24T17:21:41Z]\\\\nI1126 13:24:55.030896 6719 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-scheduler/openshift-kube-scheduler-crc\\\\nI1126 13:24:55.030901 6719\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-26T13:24:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:24:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://90b607e204ba7ee5db
36062e21d2c68baeabb2d05723844af64be6aa748547d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sw8dj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-qc7jt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.431119 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.431177 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.431194 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.431217 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.431234 4695 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.446958 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e5c3d59eea1245398814432329f7d20974af5c4731e19dbfdc526a1f95b418d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.466545 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.485839 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.499200 4695 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pslgh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b0bd1ae7-27db-479a-9f8e-256980eef3be\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-26T13:23:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0a6850f456c6b95e8db40a0771d7c04bae71f2149f0a63bd1b478ba72f044ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-26T13:23:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42wpj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-26T13:23:57Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pslgh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-26T13:25:07Z is after 2025-08-24T17:21:41Z" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.533151 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.533193 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.533204 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.533220 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.533233 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.635725 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.635849 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.635868 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.635903 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.635922 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.739130 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.739198 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.739221 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.739250 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.739268 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.842312 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.842425 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.842455 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.842483 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.842504 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.945274 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.945329 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.945380 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.945406 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:07 crc kubenswrapper[4695]: I1126 13:25:07.945423 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:07Z","lastTransitionTime":"2025-11-26T13:25:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.047179 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.047221 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.047230 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.047243 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.047250 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.149971 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.150029 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.150046 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.150069 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.150086 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.161327 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:08 crc kubenswrapper[4695]: E1126 13:25:08.161438 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.253321 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.253446 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.253473 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.253500 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.253522 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.356145 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.356223 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.356249 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.356277 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.356296 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.460038 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.460102 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.460119 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.460143 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.460161 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.563114 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.563190 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.563208 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.563232 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.563251 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.666461 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.666520 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.666553 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.666581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.666603 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.769866 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.769902 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.769911 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.769925 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.769934 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.872266 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.872334 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.872366 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.872382 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.872394 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.975265 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.975327 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.975369 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.975393 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:08 crc kubenswrapper[4695]: I1126 13:25:08.975411 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:08Z","lastTransitionTime":"2025-11-26T13:25:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.078595 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.078678 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.078696 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.078729 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.078767 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.161677 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.161772 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:09 crc kubenswrapper[4695]: E1126 13:25:09.161864 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.161956 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:09 crc kubenswrapper[4695]: E1126 13:25:09.162105 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:09 crc kubenswrapper[4695]: E1126 13:25:09.162192 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.163256 4695 scope.go:117] "RemoveContainer" containerID="fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5" Nov 26 13:25:09 crc kubenswrapper[4695]: E1126 13:25:09.163592 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.181825 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.181901 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.181926 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.181957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.181980 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.284930 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.285002 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.285016 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.285031 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.285043 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.388022 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.388105 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.388124 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.388149 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.388167 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.492293 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.492386 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.492405 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.492452 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.492470 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.595504 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.595566 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.595576 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.595588 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.595598 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.698411 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.698477 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.698498 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.698524 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.698545 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.801945 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.801996 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.802009 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.802026 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.802038 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.905400 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.905445 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.905458 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.905472 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:09 crc kubenswrapper[4695]: I1126 13:25:09.905483 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:09Z","lastTransitionTime":"2025-11-26T13:25:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.008168 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.008216 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.008229 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.008248 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.008260 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.112164 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.112223 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.112240 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.112264 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.112285 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.161673 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:10 crc kubenswrapper[4695]: E1126 13:25:10.161919 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.215554 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.215611 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.215621 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.215641 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.215654 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.319024 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.319102 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.319128 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.319157 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.319179 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.421919 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.421960 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.421972 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.421988 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.422001 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.524898 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.524938 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.524987 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.525004 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.525016 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.628394 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.628473 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.628492 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.628521 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.628539 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.731962 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.732021 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.732036 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.732057 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.732071 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.834773 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.834829 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.834840 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.834868 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.834883 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.938235 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.938303 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.938321 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.938381 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:10 crc kubenswrapper[4695]: I1126 13:25:10.938402 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:10Z","lastTransitionTime":"2025-11-26T13:25:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.041546 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.041627 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.041652 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.041687 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.041709 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.145699 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.145764 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.145773 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.145790 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.145800 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.161703 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.161751 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.161889 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:11 crc kubenswrapper[4695]: E1126 13:25:11.161886 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:11 crc kubenswrapper[4695]: E1126 13:25:11.161991 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:11 crc kubenswrapper[4695]: E1126 13:25:11.162090 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.248809 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.248886 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.248906 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.248936 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.248956 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.351786 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.351844 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.351856 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.351879 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.351892 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.454838 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.454935 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.454955 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.454982 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.455004 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.559143 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.559264 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.559291 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.559325 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.559387 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.662279 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.662336 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.662379 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.662399 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.662415 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.764470 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.764531 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.764555 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.764584 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.764605 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.868147 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.868223 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.868246 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.868275 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.868299 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.970867 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.970941 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.970964 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.970991 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:11 crc kubenswrapper[4695]: I1126 13:25:11.971007 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:11Z","lastTransitionTime":"2025-11-26T13:25:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.073966 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.074022 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.074068 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.074085 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.074096 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.161805 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:12 crc kubenswrapper[4695]: E1126 13:25:12.162071 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.176800 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.176856 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.176869 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.176894 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.176914 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.279956 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.280019 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.280038 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.280094 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.280113 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.382807 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.382845 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.382862 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.382876 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.382886 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.486304 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.486392 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.486412 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.486440 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.486480 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.588787 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.588829 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.588863 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.588880 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.588890 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.692173 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.692231 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.692245 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.692269 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.692284 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.795195 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.795259 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.795281 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.795310 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.795332 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.898751 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.898797 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.898809 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.898827 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:12 crc kubenswrapper[4695]: I1126 13:25:12.898840 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:12Z","lastTransitionTime":"2025-11-26T13:25:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.002502 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.002581 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.002604 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.002633 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.002656 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.106103 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.106187 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.106210 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.106239 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.106258 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.162010 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.162046 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:13 crc kubenswrapper[4695]: E1126 13:25:13.162190 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.162251 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:13 crc kubenswrapper[4695]: E1126 13:25:13.162443 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:13 crc kubenswrapper[4695]: E1126 13:25:13.162597 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.208977 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.209033 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.209051 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.209073 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.209089 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.311827 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.311877 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.311894 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.311916 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.311939 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.414842 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.414913 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.414927 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.414960 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.414976 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.517497 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.517538 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.517547 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.517559 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.517567 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.620872 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.620929 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.620948 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.620971 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.620988 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.724193 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.724256 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.724274 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.724304 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.724322 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.826957 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.827022 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.827039 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.827062 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.827079 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.931177 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.931259 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.931277 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.931304 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:13 crc kubenswrapper[4695]: I1126 13:25:13.931324 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:13Z","lastTransitionTime":"2025-11-26T13:25:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.034299 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.034425 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.034447 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.034481 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.034502 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:14Z","lastTransitionTime":"2025-11-26T13:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.137703 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.137774 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.137797 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.137828 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.137852 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:14Z","lastTransitionTime":"2025-11-26T13:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.161288 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:14 crc kubenswrapper[4695]: E1126 13:25:14.161600 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.240659 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.240699 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.240710 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.240727 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.240748 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:14Z","lastTransitionTime":"2025-11-26T13:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.349997 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.350080 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.350107 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.350136 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.350158 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:14Z","lastTransitionTime":"2025-11-26T13:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.352981 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.353018 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.353028 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.353041 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.353051 4695 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-26T13:25:14Z","lastTransitionTime":"2025-11-26T13:25:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.409038 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz"] Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.409506 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.411668 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.411892 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.412784 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.414283 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.445400 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.445378083 podStartE2EDuration="43.445378083s" podCreationTimestamp="2025-11-26 13:24:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.431189153 +0000 UTC m=+98.067014235" watchObservedRunningTime="2025-11-26 13:25:14.445378083 +0000 UTC m=+98.081203165" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.467938 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=78.467908496 podStartE2EDuration="1m18.467908496s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.44781212 +0000 UTC m=+98.083637212" watchObservedRunningTime="2025-11-26 13:25:14.467908496 +0000 UTC 
m=+98.103733618" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.494054 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.494596 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.494709 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.494789 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.495027 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.509702 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-r5n2z" podStartSLOduration=78.509668348 podStartE2EDuration="1m18.509668348s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.499057843 +0000 UTC m=+98.134882925" watchObservedRunningTime="2025-11-26 13:25:14.509668348 +0000 UTC m=+98.145493460" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.522844 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-x9bgt" podStartSLOduration=77.522800914 podStartE2EDuration="1m17.522800914s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.509520494 +0000 UTC m=+98.145345586" watchObservedRunningTime="2025-11-26 13:25:14.522800914 +0000 UTC m=+98.158625996" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.523196 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-wwcjg" podStartSLOduration=77.523190836 podStartE2EDuration="1m17.523190836s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.523038092 +0000 UTC m=+98.158863194" watchObservedRunningTime="2025-11-26 13:25:14.523190836 +0000 
UTC m=+98.159015918" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.535129 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=13.535098014 podStartE2EDuration="13.535098014s" podCreationTimestamp="2025-11-26 13:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.53467283 +0000 UTC m=+98.170497922" watchObservedRunningTime="2025-11-26 13:25:14.535098014 +0000 UTC m=+98.170923116" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.575781 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=13.575757871 podStartE2EDuration="13.575757871s" podCreationTimestamp="2025-11-26 13:25:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.575259815 +0000 UTC m=+98.211084917" watchObservedRunningTime="2025-11-26 13:25:14.575757871 +0000 UTC m=+98.211582953" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.595699 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.595759 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" 
Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.595818 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.595850 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.595894 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.595890 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.595918 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-serving-cert\") pod 
\"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: E1126 13:25:14.596041 4695 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:25:14 crc kubenswrapper[4695]: E1126 13:25:14.596128 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs podName:755825f0-d565-4a02-8a54-8f9be77991d6 nodeName:}" failed. No retries permitted until 2025-11-26 13:26:18.596103875 +0000 UTC m=+162.231928977 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs") pod "network-metrics-daemon-l9n9h" (UID: "755825f0-d565-4a02-8a54-8f9be77991d6") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.596041 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.597203 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-service-ca\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.597835 4695 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=78.597819219 podStartE2EDuration="1m18.597819219s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.597639334 +0000 UTC m=+98.233464416" watchObservedRunningTime="2025-11-26 13:25:14.597819219 +0000 UTC m=+98.233644311" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.604212 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.619628 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a5b995c-251e-4bbf-ad6f-d7baa46c6426-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-2rrxz\" (UID: \"0a5b995c-251e-4bbf-ad6f-d7baa46c6426\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.639905 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podStartSLOduration=78.639883261 podStartE2EDuration="1m18.639883261s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.639020285 +0000 UTC m=+98.274845367" watchObservedRunningTime="2025-11-26 13:25:14.639883261 +0000 UTC m=+98.275708353" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.723394 4695 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.727866 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pslgh" podStartSLOduration=78.727853137 podStartE2EDuration="1m18.727853137s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.715926879 +0000 UTC m=+98.351751961" watchObservedRunningTime="2025-11-26 13:25:14.727853137 +0000 UTC m=+98.363678219" Nov 26 13:25:14 crc kubenswrapper[4695]: I1126 13:25:14.760850 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-hgtpx" podStartSLOduration=78.760832421 podStartE2EDuration="1m18.760832421s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:14.748179711 +0000 UTC m=+98.384004793" watchObservedRunningTime="2025-11-26 13:25:14.760832421 +0000 UTC m=+98.396657503" Nov 26 13:25:15 crc kubenswrapper[4695]: I1126 13:25:15.161302 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:15 crc kubenswrapper[4695]: I1126 13:25:15.161444 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:15 crc kubenswrapper[4695]: I1126 13:25:15.161462 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:15 crc kubenswrapper[4695]: E1126 13:25:15.162321 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:15 crc kubenswrapper[4695]: E1126 13:25:15.162147 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:15 crc kubenswrapper[4695]: E1126 13:25:15.162489 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:15 crc kubenswrapper[4695]: I1126 13:25:15.713115 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" event={"ID":"0a5b995c-251e-4bbf-ad6f-d7baa46c6426","Type":"ContainerStarted","Data":"a6a8d8bf79abf81a6d37405ff1759c1b3dafbada2aed6b02364747b780b58fbc"} Nov 26 13:25:15 crc kubenswrapper[4695]: I1126 13:25:15.713176 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" event={"ID":"0a5b995c-251e-4bbf-ad6f-d7baa46c6426","Type":"ContainerStarted","Data":"ef6337c215bf02c35e3d9766ba786311fdee632879fef2ded1aa00c25875fe0c"} Nov 26 13:25:15 crc kubenswrapper[4695]: I1126 13:25:15.731773 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-2rrxz" podStartSLOduration=79.731749075 podStartE2EDuration="1m19.731749075s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:15.730184316 +0000 UTC m=+99.366009398" watchObservedRunningTime="2025-11-26 13:25:15.731749075 +0000 UTC m=+99.367574157" Nov 26 13:25:16 crc kubenswrapper[4695]: I1126 13:25:16.161905 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:16 crc kubenswrapper[4695]: E1126 13:25:16.162449 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:17 crc kubenswrapper[4695]: I1126 13:25:17.162697 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:17 crc kubenswrapper[4695]: I1126 13:25:17.162713 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:17 crc kubenswrapper[4695]: I1126 13:25:17.162817 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:17 crc kubenswrapper[4695]: E1126 13:25:17.164852 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:17 crc kubenswrapper[4695]: E1126 13:25:17.165025 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:17 crc kubenswrapper[4695]: E1126 13:25:17.165100 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:18 crc kubenswrapper[4695]: I1126 13:25:18.162069 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:18 crc kubenswrapper[4695]: E1126 13:25:18.162479 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:19 crc kubenswrapper[4695]: I1126 13:25:19.161542 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:19 crc kubenswrapper[4695]: I1126 13:25:19.161650 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:19 crc kubenswrapper[4695]: I1126 13:25:19.161724 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:19 crc kubenswrapper[4695]: E1126 13:25:19.161659 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:19 crc kubenswrapper[4695]: E1126 13:25:19.161850 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:19 crc kubenswrapper[4695]: E1126 13:25:19.161918 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:20 crc kubenswrapper[4695]: I1126 13:25:20.161523 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:20 crc kubenswrapper[4695]: E1126 13:25:20.162069 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:21 crc kubenswrapper[4695]: I1126 13:25:21.161561 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:21 crc kubenswrapper[4695]: I1126 13:25:21.161580 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:21 crc kubenswrapper[4695]: I1126 13:25:21.161590 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:21 crc kubenswrapper[4695]: E1126 13:25:21.162110 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:21 crc kubenswrapper[4695]: E1126 13:25:21.162274 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:21 crc kubenswrapper[4695]: E1126 13:25:21.162370 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:22 crc kubenswrapper[4695]: I1126 13:25:22.161641 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:22 crc kubenswrapper[4695]: E1126 13:25:22.161848 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:23 crc kubenswrapper[4695]: I1126 13:25:23.161817 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:23 crc kubenswrapper[4695]: I1126 13:25:23.161916 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:23 crc kubenswrapper[4695]: E1126 13:25:23.162019 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:23 crc kubenswrapper[4695]: I1126 13:25:23.162141 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:23 crc kubenswrapper[4695]: E1126 13:25:23.162340 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:23 crc kubenswrapper[4695]: E1126 13:25:23.163065 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:24 crc kubenswrapper[4695]: I1126 13:25:24.162432 4695 scope.go:117] "RemoveContainer" containerID="fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5" Nov 26 13:25:24 crc kubenswrapper[4695]: I1126 13:25:24.161740 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:24 crc kubenswrapper[4695]: E1126 13:25:24.163160 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-qc7jt_openshift-ovn-kubernetes(5fa56d8f-ad6a-4761-ad93-58a109b0a9a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" Nov 26 13:25:24 crc kubenswrapper[4695]: E1126 13:25:24.163824 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:25 crc kubenswrapper[4695]: I1126 13:25:25.161838 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:25 crc kubenswrapper[4695]: E1126 13:25:25.162039 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:25 crc kubenswrapper[4695]: I1126 13:25:25.161853 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:25 crc kubenswrapper[4695]: I1126 13:25:25.162305 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:25 crc kubenswrapper[4695]: E1126 13:25:25.162455 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:25 crc kubenswrapper[4695]: E1126 13:25:25.162586 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:26 crc kubenswrapper[4695]: I1126 13:25:26.161505 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:26 crc kubenswrapper[4695]: E1126 13:25:26.161659 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:27 crc kubenswrapper[4695]: I1126 13:25:27.161250 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:27 crc kubenswrapper[4695]: I1126 13:25:27.161299 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:27 crc kubenswrapper[4695]: E1126 13:25:27.162208 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:27 crc kubenswrapper[4695]: I1126 13:25:27.162225 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:27 crc kubenswrapper[4695]: E1126 13:25:27.162271 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:27 crc kubenswrapper[4695]: E1126 13:25:27.162326 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:28 crc kubenswrapper[4695]: I1126 13:25:28.161546 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:28 crc kubenswrapper[4695]: E1126 13:25:28.161715 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:29 crc kubenswrapper[4695]: I1126 13:25:29.162073 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:29 crc kubenswrapper[4695]: I1126 13:25:29.162182 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:29 crc kubenswrapper[4695]: E1126 13:25:29.162238 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:29 crc kubenswrapper[4695]: E1126 13:25:29.162414 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:29 crc kubenswrapper[4695]: I1126 13:25:29.162528 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:29 crc kubenswrapper[4695]: E1126 13:25:29.162620 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:30 crc kubenswrapper[4695]: I1126 13:25:30.161569 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:30 crc kubenswrapper[4695]: E1126 13:25:30.161788 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.161468 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.162198 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:31 crc kubenswrapper[4695]: E1126 13:25:31.162422 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.162499 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:31 crc kubenswrapper[4695]: E1126 13:25:31.162776 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:31 crc kubenswrapper[4695]: E1126 13:25:31.162974 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.773162 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/1.log" Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.773839 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/0.log" Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.773925 4695 generic.go:334] "Generic (PLEG): container finished" podID="133aab88-6958-4575-aefd-c4675266edd5" containerID="d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27" exitCode=1 Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.773979 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerDied","Data":"d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27"} Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.774034 4695 scope.go:117] "RemoveContainer" containerID="92c9e6b7341fe87a1b784522ffc2f4650f8804be13c17acf4e062cba11ea14b0" Nov 26 13:25:31 crc kubenswrapper[4695]: I1126 13:25:31.774605 4695 scope.go:117] "RemoveContainer" containerID="d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27" Nov 26 13:25:31 crc kubenswrapper[4695]: E1126 13:25:31.774844 
4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-hgtpx_openshift-multus(133aab88-6958-4575-aefd-c4675266edd5)\"" pod="openshift-multus/multus-hgtpx" podUID="133aab88-6958-4575-aefd-c4675266edd5" Nov 26 13:25:32 crc kubenswrapper[4695]: I1126 13:25:32.161609 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:32 crc kubenswrapper[4695]: E1126 13:25:32.161795 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:32 crc kubenswrapper[4695]: I1126 13:25:32.779451 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/1.log" Nov 26 13:25:33 crc kubenswrapper[4695]: I1126 13:25:33.161403 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:33 crc kubenswrapper[4695]: I1126 13:25:33.161449 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:33 crc kubenswrapper[4695]: I1126 13:25:33.161511 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:33 crc kubenswrapper[4695]: E1126 13:25:33.162102 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:33 crc kubenswrapper[4695]: E1126 13:25:33.161925 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:33 crc kubenswrapper[4695]: E1126 13:25:33.162176 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:34 crc kubenswrapper[4695]: I1126 13:25:34.161185 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:34 crc kubenswrapper[4695]: E1126 13:25:34.161339 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:35 crc kubenswrapper[4695]: I1126 13:25:35.162031 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:35 crc kubenswrapper[4695]: I1126 13:25:35.162113 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:35 crc kubenswrapper[4695]: I1126 13:25:35.162031 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:35 crc kubenswrapper[4695]: E1126 13:25:35.162276 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:35 crc kubenswrapper[4695]: E1126 13:25:35.162484 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:35 crc kubenswrapper[4695]: E1126 13:25:35.162591 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:36 crc kubenswrapper[4695]: I1126 13:25:36.161565 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:36 crc kubenswrapper[4695]: E1126 13:25:36.161742 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:37 crc kubenswrapper[4695]: E1126 13:25:37.138424 4695 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Nov 26 13:25:37 crc kubenswrapper[4695]: I1126 13:25:37.161556 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:37 crc kubenswrapper[4695]: I1126 13:25:37.162141 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:37 crc kubenswrapper[4695]: E1126 13:25:37.162851 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:37 crc kubenswrapper[4695]: E1126 13:25:37.163006 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:37 crc kubenswrapper[4695]: I1126 13:25:37.163238 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:37 crc kubenswrapper[4695]: E1126 13:25:37.163638 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:37 crc kubenswrapper[4695]: E1126 13:25:37.277474 4695 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 13:25:38 crc kubenswrapper[4695]: I1126 13:25:38.161402 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:38 crc kubenswrapper[4695]: E1126 13:25:38.161704 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.161766 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:39 crc kubenswrapper[4695]: E1126 13:25:39.162669 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.161975 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.161923 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:39 crc kubenswrapper[4695]: E1126 13:25:39.162803 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:39 crc kubenswrapper[4695]: E1126 13:25:39.162980 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.163522 4695 scope.go:117] "RemoveContainer" containerID="fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.804125 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/3.log" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.807582 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerStarted","Data":"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651"} Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.808368 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.846482 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podStartSLOduration=103.846452874 podStartE2EDuration="1m43.846452874s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:39.843731798 +0000 UTC m=+123.479556970" watchObservedRunningTime="2025-11-26 13:25:39.846452874 +0000 UTC m=+123.482277996" Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.984592 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l9n9h"] Nov 26 13:25:39 crc kubenswrapper[4695]: I1126 13:25:39.984843 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:39 crc kubenswrapper[4695]: E1126 13:25:39.985169 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:41 crc kubenswrapper[4695]: I1126 13:25:41.162192 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:41 crc kubenswrapper[4695]: E1126 13:25:41.163498 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:41 crc kubenswrapper[4695]: I1126 13:25:41.162962 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:41 crc kubenswrapper[4695]: E1126 13:25:41.163673 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:41 crc kubenswrapper[4695]: I1126 13:25:41.163218 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:41 crc kubenswrapper[4695]: E1126 13:25:41.163794 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:41 crc kubenswrapper[4695]: I1126 13:25:41.162886 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:41 crc kubenswrapper[4695]: E1126 13:25:41.163936 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:42 crc kubenswrapper[4695]: E1126 13:25:42.312708 4695 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 26 13:25:43 crc kubenswrapper[4695]: I1126 13:25:43.161339 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:43 crc kubenswrapper[4695]: I1126 13:25:43.161434 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:43 crc kubenswrapper[4695]: E1126 13:25:43.161484 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:43 crc kubenswrapper[4695]: E1126 13:25:43.161567 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:43 crc kubenswrapper[4695]: I1126 13:25:43.162021 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:43 crc kubenswrapper[4695]: E1126 13:25:43.162400 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:43 crc kubenswrapper[4695]: I1126 13:25:43.162630 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:43 crc kubenswrapper[4695]: E1126 13:25:43.162891 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:45 crc kubenswrapper[4695]: I1126 13:25:45.161535 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:45 crc kubenswrapper[4695]: I1126 13:25:45.161578 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:45 crc kubenswrapper[4695]: I1126 13:25:45.161601 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:45 crc kubenswrapper[4695]: E1126 13:25:45.161797 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:45 crc kubenswrapper[4695]: I1126 13:25:45.161829 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:45 crc kubenswrapper[4695]: E1126 13:25:45.161992 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:45 crc kubenswrapper[4695]: E1126 13:25:45.162096 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:45 crc kubenswrapper[4695]: E1126 13:25:45.162178 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:45 crc kubenswrapper[4695]: I1126 13:25:45.162687 4695 scope.go:117] "RemoveContainer" containerID="d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27" Nov 26 13:25:45 crc kubenswrapper[4695]: I1126 13:25:45.831848 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/1.log" Nov 26 13:25:45 crc kubenswrapper[4695]: I1126 13:25:45.832162 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerStarted","Data":"d9b8fb0a2c9c23dba8b2b9dea6fb19868eeaee0f8b68596c22b1d94167eefaab"} Nov 26 13:25:47 crc kubenswrapper[4695]: I1126 13:25:47.162221 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:47 crc kubenswrapper[4695]: I1126 13:25:47.162325 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:47 crc kubenswrapper[4695]: I1126 13:25:47.162389 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:47 crc kubenswrapper[4695]: I1126 13:25:47.163736 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:47 crc kubenswrapper[4695]: E1126 13:25:47.163719 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 26 13:25:47 crc kubenswrapper[4695]: E1126 13:25:47.163970 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-l9n9h" podUID="755825f0-d565-4a02-8a54-8f9be77991d6" Nov 26 13:25:47 crc kubenswrapper[4695]: E1126 13:25:47.164257 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 26 13:25:47 crc kubenswrapper[4695]: E1126 13:25:47.164336 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.162282 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.162330 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.162471 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.162562 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.165629 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.165629 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.166667 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.166994 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.167078 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 26 13:25:49 crc kubenswrapper[4695]: I1126 13:25:49.167752 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 26 13:25:53 crc kubenswrapper[4695]: I1126 13:25:53.753153 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.354960 4695 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 26 13:25:55 crc 
kubenswrapper[4695]: I1126 13:25:55.392740 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5mdsv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.393118 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.397734 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.399183 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.399666 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.400515 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.400726 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.401107 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.401395 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.401864 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.402003 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-72w4t"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.402330 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.403051 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxwsj"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.403478 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.406592 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.407024 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.407203 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.407378 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.407478 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.407514 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: 
I1126 13:25:55.407864 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.407891 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.407974 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.408044 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.412081 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-x97bq"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.412576 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-x97bq" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.414201 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.414291 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-skqtf"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.414447 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.414592 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.414708 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.427071 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.427077 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.427685 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.428006 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.430657 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.430825 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.430964 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.431208 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.431778 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.432066 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.432332 4695 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.432559 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.432804 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.432958 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.433112 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.435406 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.435542 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.436636 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.436797 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.436960 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.437336 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.437509 4695 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.437578 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.437709 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.440026 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.440920 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.440990 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.441681 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.442806 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mxj48"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.442976 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.443143 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.443317 4695 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.443483 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.443524 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.443789 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.444382 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.444442 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.444491 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.444551 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.444492 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.444738 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.444857 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.445188 4695 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.446983 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dl9mv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.447993 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.448079 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.448251 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.448700 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.450994 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.451213 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g7rk6"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.451782 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.453394 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.453967 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.455421 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.456422 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.456477 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.457179 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.458821 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4jcnm"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.459926 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.461844 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4fqcc"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.462400 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.465023 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.465310 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.465675 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.466058 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.466080 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.466144 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.466216 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.466313 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.467277 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6w4lm"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.468131 4695 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.468813 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.469370 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.469623 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5mdsv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.481817 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.482269 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.482340 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.490565 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-t8w9k"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.491174 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.491496 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fd6bm"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.491532 4695 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.491857 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.492120 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.492273 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.502495 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.505040 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.509305 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.528559 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.528957 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.529135 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.529512 4695 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.529707 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.529798 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.529830 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.529937 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530004 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530045 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530094 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530291 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530580 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530786 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530903 4695 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.531027 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.531251 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.531524 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.530624 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.531831 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.532460 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.532965 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.533260 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.533883 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.534430 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.534958 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.535099 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.535657 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.535929 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.536004 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.536162 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-config\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.536197 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-config\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.536416 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.538920 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-audit-dir\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.539173 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.539804 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdfbv\" (UniqueName: \"kubernetes.io/projected/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-kube-api-access-fdfbv\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: 
\"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.539865 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.539894 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-client-ca\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.539928 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2aacf7d4-37f8-467c-bc7d-dc773cab58d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9tvcn\" (UID: \"2aacf7d4-37f8-467c-bc7d-dc773cab58d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.539949 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-audit-dir\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.539986 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540005 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983891ce-aeec-413f-ab55-ac0789f59708-config\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540020 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-serving-cert\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540035 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4gbz\" (UniqueName: \"kubernetes.io/projected/7c5213f4-2ee5-4136-b62c-7b291044e467-kube-api-access-p4gbz\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540057 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-client-ca\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540090 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-config\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540123 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-node-pullsecrets\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540215 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76h8r\" (UniqueName: \"kubernetes.io/projected/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-kube-api-access-76h8r\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540248 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w76pq\" (UniqueName: \"kubernetes.io/projected/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-kube-api-access-w76pq\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540268 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-serving-cert\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540284 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-encryption-config\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540285 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540299 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5213f4-2ee5-4136-b62c-7b291044e467-serving-cert\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540426 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540435 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-audit-policies\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540462 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-images\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540503 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-audit\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540537 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tmh\" (UniqueName: \"kubernetes.io/projected/ce25ffe5-b9fd-49f6-b465-54a2e2cc1441-kube-api-access-w5tmh\") pod \"dns-operator-744455d44c-skqtf\" (UID: \"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441\") " pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540558 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc825\" (UniqueName: 
\"kubernetes.io/projected/6b0b4bb2-6319-4f1b-ba1c-80256970147d-kube-api-access-lc825\") pod \"downloads-7954f5f757-x97bq\" (UID: \"6b0b4bb2-6319-4f1b-ba1c-80256970147d\") " pod="openshift-console/downloads-7954f5f757-x97bq" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540572 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-etcd-client\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540591 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnhdf\" (UniqueName: \"kubernetes.io/projected/983891ce-aeec-413f-ab55-ac0789f59708-kube-api-access-vnhdf\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540606 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-image-import-ca\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540621 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/983891ce-aeec-413f-ab55-ac0789f59708-auth-proxy-config\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: 
I1126 13:25:55.540635 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce25ffe5-b9fd-49f6-b465-54a2e2cc1441-metrics-tls\") pod \"dns-operator-744455d44c-skqtf\" (UID: \"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441\") " pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540656 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-config\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540672 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540694 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqh6k\" (UniqueName: \"kubernetes.io/projected/2aacf7d4-37f8-467c-bc7d-dc773cab58d3-kube-api-access-qqh6k\") pod \"cluster-samples-operator-665b6dd947-9tvcn\" (UID: \"2aacf7d4-37f8-467c-bc7d-dc773cab58d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540709 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/983891ce-aeec-413f-ab55-ac0789f59708-machine-approver-tls\") pod 
\"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540754 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-etcd-client\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540780 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540804 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f633ea2-f78f-4a36-8f28-13c2f053c349-serving-cert\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540830 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlq8c\" (UniqueName: \"kubernetes.io/projected/7f633ea2-f78f-4a36-8f28-13c2f053c349-kube-api-access-hlq8c\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540849 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.540866 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-encryption-config\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.541013 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.541437 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.541498 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.551722 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.553451 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.553654 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.553971 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.554465 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.554493 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.554468 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.555101 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.560433 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.564144 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.565231 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.566216 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.568914 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.568936 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.569118 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.569643 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.570286 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.571039 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.571105 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.572514 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.572757 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x97bq"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.574960 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.575376 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nch9b"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.575722 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.575842 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzxm6"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.575907 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.576700 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.578055 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.582229 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.584121 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.590789 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.592064 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.592556 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.597471 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.605483 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-skqtf"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.608103 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.609256 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-72w4t"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.611861 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.613959 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dl9mv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.617008 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mxj48"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.617039 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-65ddv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.617796 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-65ddv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.618797 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g7rk6"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.618808 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.620596 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-4j967"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.621690 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4j967" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.621806 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4jcnm"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.622739 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fd6bm"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.623859 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxwsj"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.625179 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.626307 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.627711 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.628738 4695 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.629905 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nch9b"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.631936 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.632740 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.633870 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.636700 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.636668 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.637397 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4fqcc"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.638820 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.642205 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.642639 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-service-ca\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.642800 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-etcd-client\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.642874 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.642940 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f633ea2-f78f-4a36-8f28-13c2f053c349-serving-cert\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.643015 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b73de046-eed4-42dc-99ac-37febbf86b98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.643087 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-oauth-serving-cert\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.643162 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-encryption-config\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.643233 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlq8c\" (UniqueName: \"kubernetes.io/projected/7f633ea2-f78f-4a36-8f28-13c2f053c349-kube-api-access-hlq8c\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.643304 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.643388 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-config\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 
crc kubenswrapper[4695]: I1126 13:25:55.643555 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-serving-cert\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.643642 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ttsr\" (UniqueName: \"kubernetes.io/projected/11a6c05a-5247-4b9c-9fb7-3061240e6953-kube-api-access-9ttsr\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.644736 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-config\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.644839 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-audit-dir\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.644922 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-config\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.644994 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdr2\" (UniqueName: \"kubernetes.io/projected/b73de046-eed4-42dc-99ac-37febbf86b98-kube-api-access-gtdr2\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645065 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-config\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645143 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645220 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdfbv\" (UniqueName: \"kubernetes.io/projected/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-kube-api-access-fdfbv\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645291 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-client-ca\") pod 
\"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645449 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645537 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tchg\" (UniqueName: \"kubernetes.io/projected/19892b0f-6ec5-4376-a584-c23b3938ed82-kube-api-access-8tchg\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645625 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-audit-dir\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645701 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19892b0f-6ec5-4376-a584-c23b3938ed82-serving-cert\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645784 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/2aacf7d4-37f8-467c-bc7d-dc773cab58d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9tvcn\" (UID: \"2aacf7d4-37f8-467c-bc7d-dc773cab58d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645918 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-etcd-serving-ca\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.644159 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6w4lm"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645962 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-audit-dir\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645152 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-audit-dir\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645998 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646018 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.644108 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-trusted-ca-bundle\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645104 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646186 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-config\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646266 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-client-ca\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.645867 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-config\") pod \"apiserver-76f77b778f-jxwsj\" (UID: 
\"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646471 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-serving-cert\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646566 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646672 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983891ce-aeec-413f-ab55-ac0789f59708-config\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646757 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4gbz\" (UniqueName: \"kubernetes.io/projected/7c5213f4-2ee5-4136-b62c-7b291044e467-kube-api-access-p4gbz\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646835 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/11a6c05a-5247-4b9c-9fb7-3061240e6953-srv-cert\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.646955 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-client-ca\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647033 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-node-pullsecrets\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647120 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-config\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647172 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76h8r\" (UniqueName: \"kubernetes.io/projected/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-kube-api-access-76h8r\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647206 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b73de046-eed4-42dc-99ac-37febbf86b98-trusted-ca\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647225 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647235 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-trusted-ca-bundle\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647283 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983891ce-aeec-413f-ab55-ac0789f59708-config\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647291 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-node-pullsecrets\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647328 
4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-serving-cert\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647370 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-encryption-config\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647397 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5213f4-2ee5-4136-b62c-7b291044e467-serving-cert\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647426 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-ca\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647454 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w76pq\" (UniqueName: \"kubernetes.io/projected/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-kube-api-access-w76pq\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc 
kubenswrapper[4695]: I1126 13:25:55.647478 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-audit-policies\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647546 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-images\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647627 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-audit\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647633 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-client-ca\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647689 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5tmh\" (UniqueName: \"kubernetes.io/projected/ce25ffe5-b9fd-49f6-b465-54a2e2cc1441-kube-api-access-w5tmh\") pod \"dns-operator-744455d44c-skqtf\" (UID: \"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647776 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-service-ca\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647810 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc825\" (UniqueName: \"kubernetes.io/projected/6b0b4bb2-6319-4f1b-ba1c-80256970147d-kube-api-access-lc825\") pod \"downloads-7954f5f757-x97bq\" (UID: \"6b0b4bb2-6319-4f1b-ba1c-80256970147d\") " pod="openshift-console/downloads-7954f5f757-x97bq" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647867 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-client\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647939 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mxhn\" (UniqueName: \"kubernetes.io/projected/d84d0827-d7fe-42eb-adbe-eda35247c26c-kube-api-access-9mxhn\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647971 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-etcd-client\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: 
\"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.647998 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/11a6c05a-5247-4b9c-9fb7-3061240e6953-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648029 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b73de046-eed4-42dc-99ac-37febbf86b98-metrics-tls\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648053 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-oauth-config\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648090 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnhdf\" (UniqueName: \"kubernetes.io/projected/983891ce-aeec-413f-ab55-ac0789f59708-kube-api-access-vnhdf\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648115 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/ce25ffe5-b9fd-49f6-b465-54a2e2cc1441-metrics-tls\") pod \"dns-operator-744455d44c-skqtf\" (UID: \"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441\") " pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648177 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-config\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648201 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648226 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-image-import-ca\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648250 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/983891ce-aeec-413f-ab55-ac0789f59708-auth-proxy-config\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648466 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qqh6k\" (UniqueName: \"kubernetes.io/projected/2aacf7d4-37f8-467c-bc7d-dc773cab58d3-kube-api-access-qqh6k\") pod \"cluster-samples-operator-665b6dd947-9tvcn\" (UID: \"2aacf7d4-37f8-467c-bc7d-dc773cab58d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648477 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-audit-policies\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648501 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/983891ce-aeec-413f-ab55-ac0789f59708-machine-approver-tls\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648225 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-config\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.648998 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-images\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:55 crc 
kubenswrapper[4695]: I1126 13:25:55.649467 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.649499 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/983891ce-aeec-413f-ab55-ac0789f59708-auth-proxy-config\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.650664 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-image-import-ca\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.650789 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.651644 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-encryption-config\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.651651 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-serving-cert\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.651558 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-config\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.652135 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-serving-cert\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.652812 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.653107 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-audit\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.653819 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 
13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.653997 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.654816 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce25ffe5-b9fd-49f6-b465-54a2e2cc1441-metrics-tls\") pod \"dns-operator-744455d44c-skqtf\" (UID: \"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441\") " pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.654865 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/983891ce-aeec-413f-ab55-ac0789f59708-machine-approver-tls\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.655268 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-etcd-client\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.655389 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5213f4-2ee5-4136-b62c-7b291044e467-serving-cert\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.655542 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/2aacf7d4-37f8-467c-bc7d-dc773cab58d3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-9tvcn\" (UID: \"2aacf7d4-37f8-467c-bc7d-dc773cab58d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.655628 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f633ea2-f78f-4a36-8f28-13c2f053c349-serving-cert\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.655879 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.656541 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.657732 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-etcd-client\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.658213 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-encryption-config\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.659409 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.661134 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-65ddv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.662377 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzxm6"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.663463 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4j967"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.664953 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5ddpz"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.667022 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.667046 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5ddpz"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.667140 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.667903 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-gtfpw"] Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.668458 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.676117 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.695581 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.715669 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.741728 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.749875 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b73de046-eed4-42dc-99ac-37febbf86b98-metrics-tls\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.749921 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/11a6c05a-5247-4b9c-9fb7-3061240e6953-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.749959 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-oauth-config\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " 
pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.749996 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-service-ca\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750023 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b73de046-eed4-42dc-99ac-37febbf86b98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750064 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-oauth-serving-cert\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750102 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-serving-cert\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750125 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ttsr\" (UniqueName: \"kubernetes.io/projected/11a6c05a-5247-4b9c-9fb7-3061240e6953-kube-api-access-9ttsr\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: 
\"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750157 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-config\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750181 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-config\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750201 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtdr2\" (UniqueName: \"kubernetes.io/projected/b73de046-eed4-42dc-99ac-37febbf86b98-kube-api-access-gtdr2\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750236 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tchg\" (UniqueName: \"kubernetes.io/projected/19892b0f-6ec5-4376-a584-c23b3938ed82-kube-api-access-8tchg\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750272 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19892b0f-6ec5-4376-a584-c23b3938ed82-serving-cert\") pod 
\"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750305 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/11a6c05a-5247-4b9c-9fb7-3061240e6953-srv-cert\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750367 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b73de046-eed4-42dc-99ac-37febbf86b98-trusted-ca\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750388 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-trusted-ca-bundle\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750411 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-ca\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750472 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-service-ca\") pod 
\"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750499 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-client\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.750522 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mxhn\" (UniqueName: \"kubernetes.io/projected/d84d0827-d7fe-42eb-adbe-eda35247c26c-kube-api-access-9mxhn\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.751076 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-service-ca\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.751618 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-oauth-serving-cert\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.752835 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-config\") pod \"etcd-operator-b45778765-4jcnm\" (UID: 
\"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.753031 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-service-ca\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.753280 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b73de046-eed4-42dc-99ac-37febbf86b98-trusted-ca\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.753469 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-config\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.753675 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-trusted-ca-bundle\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.754483 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-oauth-config\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " 
pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.755416 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.756014 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-client\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.756356 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/19892b0f-6ec5-4376-a584-c23b3938ed82-etcd-ca\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.756425 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b73de046-eed4-42dc-99ac-37febbf86b98-metrics-tls\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.758233 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/19892b0f-6ec5-4376-a584-c23b3938ed82-serving-cert\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.775313 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-serving-cert\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.776441 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.795188 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.814814 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.835911 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.855204 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.875473 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.895742 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.914643 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.935938 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.955506 4695 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.975876 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 26 13:25:55 crc kubenswrapper[4695]: I1126 13:25:55.995132 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.015726 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.035590 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.055416 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.075883 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.096449 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.116248 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.135595 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.176134 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.195555 4695 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.217275 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.235808 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.247143 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/11a6c05a-5247-4b9c-9fb7-3061240e6953-srv-cert\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.256390 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.263959 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/11a6c05a-5247-4b9c-9fb7-3061240e6953-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.275616 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.315744 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.336574 
4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.356056 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.375712 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.396461 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.415957 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.436034 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.455376 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.475967 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.497675 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.515523 4695 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.535612 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.556708 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.574281 4695 request.go:700] Waited for 1.019308434s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpackage-server-manager-serving-cert&limit=500&resourceVersion=0 Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.576312 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.596476 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.617209 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.635813 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.655899 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.675972 4695 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.696609 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.715956 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.735765 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.756422 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.775792 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.795479 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.818810 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.836458 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.855336 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.876081 4695 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.895557 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.915165 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.935554 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.956180 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 26 13:25:56 crc kubenswrapper[4695]: I1126 13:25:56.977423 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.001545 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.015804 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.035842 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.055942 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.075415 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.096188 4695 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.115990 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.136580 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.155720 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.176271 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.195863 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.216110 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.236547 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.256506 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.275440 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.295551 4695 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.316598 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.362283 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlq8c\" (UniqueName: \"kubernetes.io/projected/7f633ea2-f78f-4a36-8f28-13c2f053c349-kube-api-access-hlq8c\") pod \"route-controller-manager-6576b87f9c-8q8mp\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.382194 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdfbv\" (UniqueName: \"kubernetes.io/projected/6f93277a-4f74-4839-8b28-2ff1bfd6f7ca-kube-api-access-fdfbv\") pod \"machine-api-operator-5694c8668f-72w4t\" (UID: \"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.392248 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4gbz\" (UniqueName: \"kubernetes.io/projected/7c5213f4-2ee5-4136-b62c-7b291044e467-kube-api-access-p4gbz\") pod \"controller-manager-879f6c89f-5mdsv\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.423736 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76h8r\" (UniqueName: \"kubernetes.io/projected/f4397fbb-62ef-4e2f-9ace-6c76a6e49f85-kube-api-access-76h8r\") pod \"apiserver-76f77b778f-jxwsj\" (UID: \"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85\") " pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.435933 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w76pq\" (UniqueName: \"kubernetes.io/projected/2dc1db4a-8e2b-4e1f-909c-08e8ee336041-kube-api-access-w76pq\") pod \"apiserver-7bbb656c7d-2zqsx\" (UID: \"2dc1db4a-8e2b-4e1f-909c-08e8ee336041\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.453100 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5tmh\" (UniqueName: \"kubernetes.io/projected/ce25ffe5-b9fd-49f6-b465-54a2e2cc1441-kube-api-access-w5tmh\") pod \"dns-operator-744455d44c-skqtf\" (UID: \"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441\") " pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.474580 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc825\" (UniqueName: \"kubernetes.io/projected/6b0b4bb2-6319-4f1b-ba1c-80256970147d-kube-api-access-lc825\") pod \"downloads-7954f5f757-x97bq\" (UID: \"6b0b4bb2-6319-4f1b-ba1c-80256970147d\") " pod="openshift-console/downloads-7954f5f757-x97bq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.495847 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqh6k\" (UniqueName: \"kubernetes.io/projected/2aacf7d4-37f8-467c-bc7d-dc773cab58d3-kube-api-access-qqh6k\") pod \"cluster-samples-operator-665b6dd947-9tvcn\" (UID: \"2aacf7d4-37f8-467c-bc7d-dc773cab58d3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.514184 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.517165 4695 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.521508 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnhdf\" (UniqueName: \"kubernetes.io/projected/983891ce-aeec-413f-ab55-ac0789f59708-kube-api-access-vnhdf\") pod \"machine-approver-56656f9798-sjzmc\" (UID: \"983891ce-aeec-413f-ab55-ac0789f59708\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.535113 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.535560 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.554440 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.556844 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 26 13:25:57 crc kubenswrapper[4695]: W1126 13:25:57.573807 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod983891ce_aeec_413f_ab55_ac0789f59708.slice/crio-f496e5e51e63e19332a2309a7c0ffd7cc7fd106ccfa68088777760ef8c1f4369 WatchSource:0}: Error finding container f496e5e51e63e19332a2309a7c0ffd7cc7fd106ccfa68088777760ef8c1f4369: Status 404 returned error can't find the container with id f496e5e51e63e19332a2309a7c0ffd7cc7fd106ccfa68088777760ef8c1f4369 Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.576192 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.587014 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.592782 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.593329 4695 request.go:700] Waited for 1.924647615s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-tls&limit=500&resourceVersion=0 Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.598639 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.612389 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.615564 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.658712 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mxhn\" (UniqueName: \"kubernetes.io/projected/d84d0827-d7fe-42eb-adbe-eda35247c26c-kube-api-access-9mxhn\") pod \"console-f9d7485db-4fqcc\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.674017 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.681413 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-x97bq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.681876 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b73de046-eed4-42dc-99ac-37febbf86b98-bound-sa-token\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.694711 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ttsr\" (UniqueName: \"kubernetes.io/projected/11a6c05a-5247-4b9c-9fb7-3061240e6953-kube-api-access-9ttsr\") pod \"olm-operator-6b444d44fb-hrqkb\" (UID: \"11a6c05a-5247-4b9c-9fb7-3061240e6953\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.698544 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.725192 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtdr2\" (UniqueName: \"kubernetes.io/projected/b73de046-eed4-42dc-99ac-37febbf86b98-kube-api-access-gtdr2\") pod \"ingress-operator-5b745b69d9-zfxpp\" (UID: \"b73de046-eed4-42dc-99ac-37febbf86b98\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.734444 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tchg\" (UniqueName: \"kubernetes.io/projected/19892b0f-6ec5-4376-a584-c23b3938ed82-kube-api-access-8tchg\") pod \"etcd-operator-b45778765-4jcnm\" (UID: \"19892b0f-6ec5-4376-a584-c23b3938ed82\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.754270 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775276 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c61b802-51ae-4857-b4ee-6845ea44d69e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fd6bm\" (UID: \"6c61b802-51ae-4857-b4ee-6845ea44d69e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775402 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775426 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775454 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775471 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-audit-policies\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775509 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-trusted-ca\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775531 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-tls\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775568 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775603 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" 
(UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775630 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775656 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxqrc\" (UniqueName: \"kubernetes.io/projected/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-kube-api-access-pxqrc\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775674 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/395c4612-671d-4cd4-9770-4dd41789c3c1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775693 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-images\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 
13:25:57.775710 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79smd\" (UniqueName: \"kubernetes.io/projected/0df7974d-d44f-4c17-b4a4-afcef9078807-kube-api-access-79smd\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775741 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-certificates\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775758 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775778 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-proxy-tls\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775829 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-service-ca-bundle\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775858 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af661467-8667-4b09-ae7a-9ad7c99a7de5-config\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775911 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775934 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af661467-8667-4b09-ae7a-9ad7c99a7de5-serving-cert\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.775964 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-bound-sa-token\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc 
kubenswrapper[4695]: I1126 13:25:57.776004 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59f9ffa6-7e49-4182-a02c-9de8f1010928-serving-cert\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776020 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3618463d-9c33-4b4c-980f-1a91fca41cbe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776036 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vhk9\" (UniqueName: \"kubernetes.io/projected/6c61b802-51ae-4857-b4ee-6845ea44d69e-kube-api-access-2vhk9\") pod \"multus-admission-controller-857f4d67dd-fd6bm\" (UID: \"6c61b802-51ae-4857-b4ee-6845ea44d69e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776060 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mp7d\" (UniqueName: \"kubernetes.io/projected/af661467-8667-4b09-ae7a-9ad7c99a7de5-kube-api-access-8mp7d\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776095 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776123 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776143 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3618463d-9c33-4b4c-980f-1a91fca41cbe-serving-cert\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776167 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776212 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nchxc\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-kube-api-access-nchxc\") pod 
\"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776248 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-default-certificate\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776269 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776290 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdsjt\" (UniqueName: \"kubernetes.io/projected/eacdeb1b-093e-4110-b6a6-b2605bf167ee-kube-api-access-cdsjt\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776314 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 
13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776336 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df7974d-d44f-4c17-b4a4-afcef9078807-service-ca-bundle\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776413 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776441 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-stats-auth\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776467 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eacdeb1b-093e-4110-b6a6-b2605bf167ee-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776506 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/395c4612-671d-4cd4-9770-4dd41789c3c1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776561 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vtv6\" (UniqueName: \"kubernetes.io/projected/29726719-a46b-4403-b241-5397d624f714-kube-api-access-2vtv6\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776612 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-metrics-certs\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776633 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-config\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776658 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af661467-8667-4b09-ae7a-9ad7c99a7de5-trusted-ca\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 
13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776691 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85kcg\" (UniqueName: \"kubernetes.io/projected/59f9ffa6-7e49-4182-a02c-9de8f1010928-kube-api-access-85kcg\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776756 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776789 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eacdeb1b-093e-4110-b6a6-b2605bf167ee-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776824 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29726719-a46b-4403-b241-5397d624f714-audit-dir\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776842 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776857 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8mmb\" (UniqueName: \"kubernetes.io/projected/395c4612-671d-4cd4-9770-4dd41789c3c1-kube-api-access-l8mmb\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776882 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776904 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w27qm\" (UniqueName: \"kubernetes.io/projected/3618463d-9c33-4b4c-980f-1a91fca41cbe-kube-api-access-w27qm\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776952 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.776981 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjkvp\" (UniqueName: \"kubernetes.io/projected/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-kube-api-access-jjkvp\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.777002 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: E1126 13:25:57.779811 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.279792745 +0000 UTC m=+141.915617827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.783009 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx"] Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.795946 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5mdsv"] Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.795968 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.803199 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.837274 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880015 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880560 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880591 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880638 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880655 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbckd\" (UniqueName: \"kubernetes.io/projected/ec75b248-8586-4030-a552-86eba44b36fa-kube-api-access-hbckd\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880672 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3618463d-9c33-4b4c-980f-1a91fca41cbe-serving-cert\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880692 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880719 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z64kh\" (UniqueName: \"kubernetes.io/projected/d0b3825e-da05-40b6-ac8d-f469e634a019-kube-api-access-z64kh\") pod \"package-server-manager-789f6589d5-qckcf\" (UID: \"d0b3825e-da05-40b6-ac8d-f469e634a019\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880734 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-plugins-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: 
\"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880750 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nchxc\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-kube-api-access-nchxc\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880764 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38941a1c-fbd3-4909-b5f3-2128059e9d95-srv-cert\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880780 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf75w\" (UniqueName: \"kubernetes.io/projected/bd88959e-7f4f-437a-8084-58848727bfdb-kube-api-access-cf75w\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880796 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-default-certificate\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880813 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880833 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdsjt\" (UniqueName: \"kubernetes.io/projected/eacdeb1b-093e-4110-b6a6-b2605bf167ee-kube-api-access-cdsjt\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880849 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df7974d-d44f-4c17-b4a4-afcef9078807-service-ca-bundle\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880863 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880880 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941f9959-8397-4065-bbbf-79ef68aa8148-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880896 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880912 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-socket-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880929 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eacdeb1b-093e-4110-b6a6-b2605bf167ee-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880952 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-stats-auth\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880974 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/ec75b248-8586-4030-a552-86eba44b36fa-metrics-tls\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.880995 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a9b369-2369-4fe3-9568-ada564d1c2a6-secret-volume\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881013 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp4nn\" (UniqueName: \"kubernetes.io/projected/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-kube-api-access-mp4nn\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881036 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/395c4612-671d-4cd4-9770-4dd41789c3c1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881075 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vtv6\" (UniqueName: \"kubernetes.io/projected/29726719-a46b-4403-b241-5397d624f714-kube-api-access-2vtv6\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 
13:25:57.881092 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-metrics-certs\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881117 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-proxy-tls\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881140 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-config\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881156 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af661467-8667-4b09-ae7a-9ad7c99a7de5-trusted-ca\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881190 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-csi-data-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881208 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334b6e1b-3607-49f1-9c30-3a22590c2bfa-config\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881250 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85kcg\" (UniqueName: \"kubernetes.io/projected/59f9ffa6-7e49-4182-a02c-9de8f1010928-kube-api-access-85kcg\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881273 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062a5bcb-3e66-473a-b594-5cf089a9fb73-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881298 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881320 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9fx4\" (UniqueName: \"kubernetes.io/projected/dd6ca3ba-1c2b-43b4-b324-a197ad4ed736-kube-api-access-w9fx4\") pod \"ingress-canary-65ddv\" (UID: \"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736\") " pod="openshift-ingress-canary/ingress-canary-65ddv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881339 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bd88959e-7f4f-437a-8084-58848727bfdb-certs\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881386 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-webhook-cert\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881415 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881522 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eacdeb1b-093e-4110-b6a6-b2605bf167ee-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881602 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29726719-a46b-4403-b241-5397d624f714-audit-dir\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881630 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtvxh\" (UniqueName: \"kubernetes.io/projected/db247a50-420f-448d-9385-057577216de5-kube-api-access-vtvxh\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881690 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881720 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881741 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8mmb\" (UniqueName: 
\"kubernetes.io/projected/395c4612-671d-4cd4-9770-4dd41789c3c1-kube-api-access-l8mmb\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881764 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941f9959-8397-4065-bbbf-79ef68aa8148-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881797 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w27qm\" (UniqueName: \"kubernetes.io/projected/3618463d-9c33-4b4c-980f-1a91fca41cbe-kube-api-access-w27qm\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881825 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-config\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881847 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db247a50-420f-448d-9385-057577216de5-signing-cabundle\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881873 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881926 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjkvp\" (UniqueName: \"kubernetes.io/projected/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-kube-api-access-jjkvp\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881951 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-apiservice-cert\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.881983 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dcf9ce-1671-4d92-9eab-60c4225eb208-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882027 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882050 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/334b6e1b-3607-49f1-9c30-3a22590c2bfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882077 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eacdeb1b-093e-4110-b6a6-b2605bf167ee-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882089 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c61b802-51ae-4857-b4ee-6845ea44d69e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fd6bm\" (UID: \"6c61b802-51ae-4857-b4ee-6845ea44d69e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882114 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec75b248-8586-4030-a552-86eba44b36fa-config-volume\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " 
pod="openshift-dns/dns-default-4j967" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882140 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-mountpoint-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882167 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpdfz\" (UniqueName: \"kubernetes.io/projected/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-kube-api-access-dpdfz\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882195 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkjbt\" (UniqueName: \"kubernetes.io/projected/38e216fe-b3a6-4e07-a537-79f9711617b2-kube-api-access-nkjbt\") pod \"migrator-59844c95c7-t7whs\" (UID: \"38e216fe-b3a6-4e07-a537-79f9711617b2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882224 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882246 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882271 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4n8\" (UniqueName: \"kubernetes.io/projected/062a5bcb-3e66-473a-b594-5cf089a9fb73-kube-api-access-qb4n8\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882300 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b3825e-da05-40b6-ac8d-f469e634a019-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qckcf\" (UID: \"d0b3825e-da05-40b6-ac8d-f469e634a019\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882373 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-audit-policies\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882400 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-registration-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: 
\"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882428 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-trusted-ca\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882448 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd88959e-7f4f-437a-8084-58848727bfdb-node-bootstrap-token\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882466 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dcf9ce-1671-4d92-9eab-60c4225eb208-config\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882517 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-tls\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882548 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mdmg\" (UniqueName: 
\"kubernetes.io/projected/38941a1c-fbd3-4909-b5f3-2128059e9d95-kube-api-access-2mdmg\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882583 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882608 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbg8n\" (UniqueName: \"kubernetes.io/projected/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-kube-api-access-cbg8n\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882632 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882653 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxqrc\" (UniqueName: \"kubernetes.io/projected/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-kube-api-access-pxqrc\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882677 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882729 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-images\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882750 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/395c4612-671d-4cd4-9770-4dd41789c3c1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882771 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-serving-cert\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882799 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-79smd\" (UniqueName: \"kubernetes.io/projected/0df7974d-d44f-4c17-b4a4-afcef9078807-kube-api-access-79smd\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882820 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db247a50-420f-448d-9385-057577216de5-signing-key\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882840 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-certificates\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882863 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882886 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-proxy-tls\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc 
kubenswrapper[4695]: I1126 13:25:57.882911 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a9b369-2369-4fe3-9568-ada564d1c2a6-config-volume\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882932 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd6ca3ba-1c2b-43b4-b324-a197ad4ed736-cert\") pod \"ingress-canary-65ddv\" (UID: \"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736\") " pod="openshift-ingress-canary/ingress-canary-65ddv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882970 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-tmpfs\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.882988 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941f9959-8397-4065-bbbf-79ef68aa8148-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883013 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-service-ca-bundle\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: 
\"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883035 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af661467-8667-4b09-ae7a-9ad7c99a7de5-config\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883058 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334b6e1b-3607-49f1-9c30-3a22590c2bfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883083 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hvth\" (UniqueName: \"kubernetes.io/projected/7645629b-9409-48bb-94cc-a63e3bd1fe4b-kube-api-access-4hvth\") pod \"control-plane-machine-set-operator-78cbb6b69f-5sbb9\" (UID: \"7645629b-9409-48bb-94cc-a63e3bd1fe4b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883104 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28dcf9ce-1671-4d92-9eab-60c4225eb208-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883132 
4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883155 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883194 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af661467-8667-4b09-ae7a-9ad7c99a7de5-serving-cert\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883213 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q2z\" (UniqueName: \"kubernetes.io/projected/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-kube-api-access-b5q2z\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883254 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062a5bcb-3e66-473a-b594-5cf089a9fb73-serving-cert\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883273 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-bound-sa-token\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883292 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w8jj\" (UniqueName: \"kubernetes.io/projected/fc485407-1013-4868-83de-ec51c4cdb030-kube-api-access-4w8jj\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883317 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59f9ffa6-7e49-4182-a02c-9de8f1010928-serving-cert\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883338 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3618463d-9c33-4b4c-980f-1a91fca41cbe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: 
I1126 13:25:57.883391 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vhk9\" (UniqueName: \"kubernetes.io/projected/6c61b802-51ae-4857-b4ee-6845ea44d69e-kube-api-access-2vhk9\") pod \"multus-admission-controller-857f4d67dd-fd6bm\" (UID: \"6c61b802-51ae-4857-b4ee-6845ea44d69e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883412 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mp7d\" (UniqueName: \"kubernetes.io/projected/af661467-8667-4b09-ae7a-9ad7c99a7de5-kube-api-access-8mp7d\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883446 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38941a1c-fbd3-4909-b5f3-2128059e9d95-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883473 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7645629b-9409-48bb-94cc-a63e3bd1fe4b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5sbb9\" (UID: \"7645629b-9409-48bb-94cc-a63e3bd1fe4b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.883494 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm4pr\" 
(UniqueName: \"kubernetes.io/projected/b9a9b369-2369-4fe3-9568-ada564d1c2a6-kube-api-access-fm4pr\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.884422 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/395c4612-671d-4cd4-9770-4dd41789c3c1-config\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.888306 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-certificates\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.888414 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.888839 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-metrics-certs\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.889143 
4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-config\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.890030 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/af661467-8667-4b09-ae7a-9ad7c99a7de5-trusted-ca\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.890094 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-audit-policies\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.890545 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-stats-auth\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.891512 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-service-ca-bundle\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc 
kubenswrapper[4695]: I1126 13:25:57.893195 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.898498 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59f9ffa6-7e49-4182-a02c-9de8f1010928-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.898838 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.899685 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.900063 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3618463d-9c33-4b4c-980f-1a91fca41cbe-serving-cert\") pod \"openshift-config-operator-7777fb866f-lwlnn\" 
(UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.900218 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af661467-8667-4b09-ae7a-9ad7c99a7de5-serving-cert\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.900367 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3618463d-9c33-4b4c-980f-1a91fca41cbe-available-featuregates\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.901125 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eacdeb1b-093e-4110-b6a6-b2605bf167ee-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.901455 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29726719-a46b-4403-b241-5397d624f714-audit-dir\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.901623 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-images\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.901910 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59f9ffa6-7e49-4182-a02c-9de8f1010928-serving-cert\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.902065 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.902060 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0df7974d-d44f-4c17-b4a4-afcef9078807-default-certificate\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.902141 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af661467-8667-4b09-ae7a-9ad7c99a7de5-config\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.903785 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.903876 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.904171 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.904979 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: E1126 13:25:57.905045 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.405025222 +0000 UTC m=+142.040850304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.905637 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-trusted-ca\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.905544 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.906473 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/6c61b802-51ae-4857-b4ee-6845ea44d69e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fd6bm\" (UID: \"6c61b802-51ae-4857-b4ee-6845ea44d69e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.907008 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-tls\") pod 
\"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.907551 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df7974d-d44f-4c17-b4a4-afcef9078807-service-ca-bundle\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.907991 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.908054 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.911981 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.913133 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.913972 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" event={"ID":"7c5213f4-2ee5-4136-b62c-7b291044e467","Type":"ContainerStarted","Data":"27dce5d2b1550679f7a1beb323687e9653c83fda0e6000d4f7d08c94b4e8fc39"} Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.913993 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/395c4612-671d-4cd4-9770-4dd41789c3c1-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.914199 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-proxy-tls\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.919879 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.920468 
4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" event={"ID":"983891ce-aeec-413f-ab55-ac0789f59708","Type":"ContainerStarted","Data":"f496e5e51e63e19332a2309a7c0ffd7cc7fd106ccfa68088777760ef8c1f4369"} Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.921584 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" event={"ID":"2dc1db4a-8e2b-4e1f-909c-08e8ee336041","Type":"ContainerStarted","Data":"964d3e267a5d9b4af508d5ff00ef68bc029afc484d26d9118ab1b8ec4dd269b4"} Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.923213 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.924769 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vtv6\" (UniqueName: \"kubernetes.io/projected/29726719-a46b-4403-b241-5397d624f714-kube-api-access-2vtv6\") pod \"oauth-openshift-558db77b4-mxj48\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.938246 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-bound-sa-token\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.973962 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2vhk9\" (UniqueName: \"kubernetes.io/projected/6c61b802-51ae-4857-b4ee-6845ea44d69e-kube-api-access-2vhk9\") pod \"multus-admission-controller-857f4d67dd-fd6bm\" (UID: \"6c61b802-51ae-4857-b4ee-6845ea44d69e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989717 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-serving-cert\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989765 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db247a50-420f-448d-9385-057577216de5-signing-key\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989783 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a9b369-2369-4fe3-9568-ada564d1c2a6-config-volume\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989806 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd6ca3ba-1c2b-43b4-b324-a197ad4ed736-cert\") pod \"ingress-canary-65ddv\" (UID: \"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736\") " pod="openshift-ingress-canary/ingress-canary-65ddv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989824 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941f9959-8397-4065-bbbf-79ef68aa8148-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989841 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-tmpfs\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989859 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hvth\" (UniqueName: \"kubernetes.io/projected/7645629b-9409-48bb-94cc-a63e3bd1fe4b-kube-api-access-4hvth\") pod \"control-plane-machine-set-operator-78cbb6b69f-5sbb9\" (UID: \"7645629b-9409-48bb-94cc-a63e3bd1fe4b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989876 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28dcf9ce-1671-4d92-9eab-60c4225eb208-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989892 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334b6e1b-3607-49f1-9c30-3a22590c2bfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: 
\"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989909 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989930 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5q2z\" (UniqueName: \"kubernetes.io/projected/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-kube-api-access-b5q2z\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989951 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/062a5bcb-3e66-473a-b594-5cf089a9fb73-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989968 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w8jj\" (UniqueName: \"kubernetes.io/projected/fc485407-1013-4868-83de-ec51c4cdb030-kube-api-access-4w8jj\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.989988 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7645629b-9409-48bb-94cc-a63e3bd1fe4b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5sbb9\" (UID: \"7645629b-9409-48bb-94cc-a63e3bd1fe4b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990007 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm4pr\" (UniqueName: \"kubernetes.io/projected/b9a9b369-2369-4fe3-9568-ada564d1c2a6-kube-api-access-fm4pr\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990037 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38941a1c-fbd3-4909-b5f3-2128059e9d95-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990055 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990073 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbckd\" (UniqueName: 
\"kubernetes.io/projected/ec75b248-8586-4030-a552-86eba44b36fa-kube-api-access-hbckd\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990099 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-plugins-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990147 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z64kh\" (UniqueName: \"kubernetes.io/projected/d0b3825e-da05-40b6-ac8d-f469e634a019-kube-api-access-z64kh\") pod \"package-server-manager-789f6589d5-qckcf\" (UID: \"d0b3825e-da05-40b6-ac8d-f469e634a019\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990169 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38941a1c-fbd3-4909-b5f3-2128059e9d95-srv-cert\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990191 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf75w\" (UniqueName: \"kubernetes.io/projected/bd88959e-7f4f-437a-8084-58848727bfdb-kube-api-access-cf75w\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990216 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941f9959-8397-4065-bbbf-79ef68aa8148-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990235 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-socket-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990264 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec75b248-8586-4030-a552-86eba44b36fa-metrics-tls\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990282 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a9b369-2369-4fe3-9568-ada564d1c2a6-secret-volume\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990301 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp4nn\" (UniqueName: \"kubernetes.io/projected/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-kube-api-access-mp4nn\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:57 
crc kubenswrapper[4695]: I1126 13:25:57.990322 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-proxy-tls\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990368 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-csi-data-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990386 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334b6e1b-3607-49f1-9c30-3a22590c2bfa-config\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990410 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062a5bcb-3e66-473a-b594-5cf089a9fb73-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990428 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9fx4\" (UniqueName: \"kubernetes.io/projected/dd6ca3ba-1c2b-43b4-b324-a197ad4ed736-kube-api-access-w9fx4\") pod 
\"ingress-canary-65ddv\" (UID: \"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736\") " pod="openshift-ingress-canary/ingress-canary-65ddv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990447 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bd88959e-7f4f-437a-8084-58848727bfdb-certs\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990463 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-webhook-cert\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990481 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990500 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtvxh\" (UniqueName: \"kubernetes.io/projected/db247a50-420f-448d-9385-057577216de5-kube-api-access-vtvxh\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990515 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/941f9959-8397-4065-bbbf-79ef68aa8148-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990875 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-config\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990937 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db247a50-420f-448d-9385-057577216de5-signing-cabundle\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.990960 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-apiservice-cert\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991000 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dcf9ce-1671-4d92-9eab-60c4225eb208-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991024 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/334b6e1b-3607-49f1-9c30-3a22590c2bfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991065 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec75b248-8586-4030-a552-86eba44b36fa-config-volume\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991095 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-mountpoint-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991121 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpdfz\" (UniqueName: \"kubernetes.io/projected/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-kube-api-access-dpdfz\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991179 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkjbt\" (UniqueName: \"kubernetes.io/projected/38e216fe-b3a6-4e07-a537-79f9711617b2-kube-api-access-nkjbt\") pod \"migrator-59844c95c7-t7whs\" (UID: \"38e216fe-b3a6-4e07-a537-79f9711617b2\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991204 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb4n8\" (UniqueName: \"kubernetes.io/projected/062a5bcb-3e66-473a-b594-5cf089a9fb73-kube-api-access-qb4n8\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991264 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b3825e-da05-40b6-ac8d-f469e634a019-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qckcf\" (UID: \"d0b3825e-da05-40b6-ac8d-f469e634a019\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991322 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991366 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd88959e-7f4f-437a-8084-58848727bfdb-node-bootstrap-token\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991390 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-registration-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991410 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dcf9ce-1671-4d92-9eab-60c4225eb208-config\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991619 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mdmg\" (UniqueName: \"kubernetes.io/projected/38941a1c-fbd3-4909-b5f3-2128059e9d95-kube-api-access-2mdmg\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.991656 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbg8n\" (UniqueName: \"kubernetes.io/projected/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-kube-api-access-cbg8n\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.992094 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-plugins-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc 
kubenswrapper[4695]: I1126 13:25:57.992449 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-config\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.993198 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/db247a50-420f-448d-9385-057577216de5-signing-cabundle\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.993458 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-csi-data-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.994170 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/334b6e1b-3607-49f1-9c30-3a22590c2bfa-config\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.995371 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062a5bcb-3e66-473a-b594-5cf089a9fb73-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:57 crc kubenswrapper[4695]: I1126 13:25:57.999159 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-socket-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.001839 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-mountpoint-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.003667 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-skqtf"] Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.003986 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.503968973 +0000 UTC m=+142.139794055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.006192 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-registration-dir\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.007016 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-proxy-tls\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.007874 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/334b6e1b-3607-49f1-9c30-3a22590c2bfa-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.010238 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/7645629b-9409-48bb-94cc-a63e3bd1fe4b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-5sbb9\" (UID: \"7645629b-9409-48bb-94cc-a63e3bd1fe4b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.012735 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.014084 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/bd88959e-7f4f-437a-8084-58848727bfdb-node-bootstrap-token\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.014147 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/bd88959e-7f4f-437a-8084-58848727bfdb-certs\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.027446 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/941f9959-8397-4065-bbbf-79ef68aa8148-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.039530 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28dcf9ce-1671-4d92-9eab-60c4225eb208-config\") pod 
\"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.040114 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a9b369-2369-4fe3-9568-ada564d1c2a6-config-volume\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.041677 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-tmpfs\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.041829 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-webhook-cert\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.041884 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-apiservice-cert\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.043471 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-serving-cert\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.043668 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/941f9959-8397-4065-bbbf-79ef68aa8148-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.043809 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/38941a1c-fbd3-4909-b5f3-2128059e9d95-srv-cert\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.051839 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.052419 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85kcg\" (UniqueName: \"kubernetes.io/projected/59f9ffa6-7e49-4182-a02c-9de8f1010928-kube-api-access-85kcg\") pod \"authentication-operator-69f744f599-g7rk6\" (UID: \"59f9ffa6-7e49-4182-a02c-9de8f1010928\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" Nov 26 13:25:58 crc kubenswrapper[4695]: 
I1126 13:25:58.052562 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/38941a1c-fbd3-4909-b5f3-2128059e9d95-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.052857 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dd6ca3ba-1c2b-43b4-b324-a197ad4ed736-cert\") pod \"ingress-canary-65ddv\" (UID: \"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736\") " pod="openshift-ingress-canary/ingress-canary-65ddv" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.053065 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b3825e-da05-40b6-ac8d-f469e634a019-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-qckcf\" (UID: \"d0b3825e-da05-40b6-ac8d-f469e634a019\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" Nov 26 13:25:58 crc kubenswrapper[4695]: W1126 13:25:58.053165 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce25ffe5_b9fd_49f6_b465_54a2e2cc1441.slice/crio-c3f97acc114cb445080f659a891ceead3b11430c454629a5007244cced51b36d WatchSource:0}: Error finding container c3f97acc114cb445080f659a891ceead3b11430c454629a5007244cced51b36d: Status 404 returned error can't find the container with id c3f97acc114cb445080f659a891ceead3b11430c454629a5007244cced51b36d Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.053262 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a9b369-2369-4fe3-9568-ada564d1c2a6-secret-volume\") 
pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.053487 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28dcf9ce-1671-4d92-9eab-60c4225eb208-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.054720 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.057194 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ec75b248-8586-4030-a552-86eba44b36fa-config-volume\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.062244 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mp7d\" (UniqueName: \"kubernetes.io/projected/af661467-8667-4b09-ae7a-9ad7c99a7de5-kube-api-access-8mp7d\") pod \"console-operator-58897d9998-6w4lm\" (UID: \"af661467-8667-4b09-ae7a-9ad7c99a7de5\") " pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.064201 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/062a5bcb-3e66-473a-b594-5cf089a9fb73-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.072701 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8mmb\" (UniqueName: \"kubernetes.io/projected/395c4612-671d-4cd4-9770-4dd41789c3c1-kube-api-access-l8mmb\") pod \"openshift-apiserver-operator-796bbdcf4f-ndgfz\" (UID: \"395c4612-671d-4cd4-9770-4dd41789c3c1\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.073635 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-72w4t"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.084324 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/db247a50-420f-448d-9385-057577216de5-signing-key\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.084490 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.084534 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ec75b248-8586-4030-a552-86eba44b36fa-metrics-tls\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.089821 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.092864 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.093146 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.593108239 +0000 UTC m=+142.228933321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.093586 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.093961 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.593944667 +0000 UTC m=+142.229769749 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.099989 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-jxwsj"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.109574 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.111814 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.112389 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdsjt\" (UniqueName: \"kubernetes.io/projected/eacdeb1b-093e-4110-b6a6-b2605bf167ee-kube-api-access-cdsjt\") pod \"openshift-controller-manager-operator-756b6f6bc6-lbn66\" (UID: \"eacdeb1b-093e-4110-b6a6-b2605bf167ee\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.114392 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nchxc\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-kube-api-access-nchxc\") 
pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.114635 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.116061 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.122847 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79smd\" (UniqueName: \"kubernetes.io/projected/0df7974d-d44f-4c17-b4a4-afcef9078807-kube-api-access-79smd\") pod \"router-default-5444994796-t8w9k\" (UID: \"0df7974d-d44f-4c17-b4a4-afcef9078807\") " pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.124880 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.140871 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w27qm\" (UniqueName: \"kubernetes.io/projected/3618463d-9c33-4b4c-980f-1a91fca41cbe-kube-api-access-w27qm\") pod \"openshift-config-operator-7777fb866f-lwlnn\" (UID: \"3618463d-9c33-4b4c-980f-1a91fca41cbe\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.168259 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxqrc\" (UniqueName: \"kubernetes.io/projected/1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f-kube-api-access-pxqrc\") pod \"machine-config-operator-74547568cd-4s6xh\" (UID: \"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.169119 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4jcnm"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.175248 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.198796 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjkvp\" (UniqueName: \"kubernetes.io/projected/60d6dff8-6ffc-45f2-be62-83ca50e7ee65-kube-api-access-jjkvp\") pod \"cluster-image-registry-operator-dc59b4c8b-x8px5\" (UID: \"60d6dff8-6ffc-45f2-be62-83ca50e7ee65\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.216751 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.217154 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.717117547 +0000 UTC m=+142.352942629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.217488 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.217894 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.717886501 +0000 UTC m=+142.353711583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.235458 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbg8n\" (UniqueName: \"kubernetes.io/projected/a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5-kube-api-access-cbg8n\") pod \"csi-hostpathplugin-5ddpz\" (UID: \"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5\") " pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.238224 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp4nn\" (UniqueName: \"kubernetes.io/projected/ff48eb97-fabb-4faa-b4dc-25aa1897d5dd-kube-api-access-mp4nn\") pod \"machine-config-controller-84d6567774-fjvrv\" (UID: \"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.243261 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.258048 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.290214 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbckd\" (UniqueName: \"kubernetes.io/projected/ec75b248-8586-4030-a552-86eba44b36fa-kube-api-access-hbckd\") pod \"dns-default-4j967\" (UID: \"ec75b248-8586-4030-a552-86eba44b36fa\") " pod="openshift-dns/dns-default-4j967" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.302996 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-x97bq"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.303223 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.307074 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb4n8\" (UniqueName: \"kubernetes.io/projected/062a5bcb-3e66-473a-b594-5cf089a9fb73-kube-api-access-qb4n8\") pod \"kube-storage-version-migrator-operator-b67b599dd-bcgtq\" (UID: \"062a5bcb-3e66-473a-b594-5cf089a9fb73\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.310151 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z64kh\" (UniqueName: \"kubernetes.io/projected/d0b3825e-da05-40b6-ac8d-f469e634a019-kube-api-access-z64kh\") pod \"package-server-manager-789f6589d5-qckcf\" (UID: \"d0b3825e-da05-40b6-ac8d-f469e634a019\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" 
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.310174 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4fqcc"] Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.313653 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.318178 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.318665 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.818644171 +0000 UTC m=+142.454469243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.320018 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf75w\" (UniqueName: \"kubernetes.io/projected/bd88959e-7f4f-437a-8084-58848727bfdb-kube-api-access-cf75w\") pod \"machine-config-server-gtfpw\" (UID: \"bd88959e-7f4f-437a-8084-58848727bfdb\") " pod="openshift-machine-config-operator/machine-config-server-gtfpw"
Nov 26 13:25:58 crc kubenswrapper[4695]: W1126 13:25:58.322471 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11a6c05a_5247_4b9c_9fb7_3061240e6953.slice/crio-f0799431d604aaf184847b0f07184761464388910bda2ad130eeb169c0fed1be WatchSource:0}: Error finding container f0799431d604aaf184847b0f07184761464388910bda2ad130eeb169c0fed1be: Status 404 returned error can't find the container with id f0799431d604aaf184847b0f07184761464388910bda2ad130eeb169c0fed1be
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.323522 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.330934 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/941f9959-8397-4065-bbbf-79ef68aa8148-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-64nqf\" (UID: \"941f9959-8397-4065-bbbf-79ef68aa8148\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.341403 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.355827 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5q2z\" (UniqueName: \"kubernetes.io/projected/fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd-kube-api-access-b5q2z\") pod \"service-ca-operator-777779d784-lg6fv\" (UID: \"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.364006 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mxj48"]
Nov 26 13:25:58 crc kubenswrapper[4695]: W1126 13:25:58.380986 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd84d0827_d7fe_42eb_adbe_eda35247c26c.slice/crio-cc898b040f3ee7eac3ea023613ca8cff77844e181b83e735728ea39810c34273 WatchSource:0}: Error finding container cc898b040f3ee7eac3ea023613ca8cff77844e181b83e735728ea39810c34273: Status 404 returned error can't find the container with id cc898b040f3ee7eac3ea023613ca8cff77844e181b83e735728ea39810c34273
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.381333 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/334b6e1b-3607-49f1-9c30-3a22590c2bfa-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-rbvpt\" (UID: \"334b6e1b-3607-49f1-9c30-3a22590c2bfa\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.385810 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.399118 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpdfz\" (UniqueName: \"kubernetes.io/projected/b7aebb0b-9f0d-43dc-ac9e-c275e9406825-kube-api-access-dpdfz\") pod \"packageserver-d55dfcdfc-dcdr7\" (UID: \"b7aebb0b-9f0d-43dc-ac9e-c275e9406825\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.419460 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.420986 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:58.920965791 +0000 UTC m=+142.556790873 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.424442 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w8jj\" (UniqueName: \"kubernetes.io/projected/fc485407-1013-4868-83de-ec51c4cdb030-kube-api-access-4w8jj\") pod \"marketplace-operator-79b997595-dzxm6\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.432158 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.447263 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkjbt\" (UniqueName: \"kubernetes.io/projected/38e216fe-b3a6-4e07-a537-79f9711617b2-kube-api-access-nkjbt\") pod \"migrator-59844c95c7-t7whs\" (UID: \"38e216fe-b3a6-4e07-a537-79f9711617b2\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.465040 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9fx4\" (UniqueName: \"kubernetes.io/projected/dd6ca3ba-1c2b-43b4-b324-a197ad4ed736-kube-api-access-w9fx4\") pod \"ingress-canary-65ddv\" (UID: \"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736\") " pod="openshift-ingress-canary/ingress-canary-65ddv"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.478974 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hvth\" (UniqueName: \"kubernetes.io/projected/7645629b-9409-48bb-94cc-a63e3bd1fe4b-kube-api-access-4hvth\") pod \"control-plane-machine-set-operator-78cbb6b69f-5sbb9\" (UID: \"7645629b-9409-48bb-94cc-a63e3bd1fe4b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.494251 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.500685 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm4pr\" (UniqueName: \"kubernetes.io/projected/b9a9b369-2369-4fe3-9568-ada564d1c2a6-kube-api-access-fm4pr\") pod \"collect-profiles-29402715-4w2c8\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.501329 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.508915 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.515541 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mdmg\" (UniqueName: \"kubernetes.io/projected/38941a1c-fbd3-4909-b5f3-2128059e9d95-kube-api-access-2mdmg\") pod \"catalog-operator-68c6474976-r4fqh\" (UID: \"38941a1c-fbd3-4909-b5f3-2128059e9d95\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.520174 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.520717 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.520870 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.020845923 +0000 UTC m=+142.656671005 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.521254 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.521562 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.021555236 +0000 UTC m=+142.657380318 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.529117 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.538365 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.538866 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/28dcf9ce-1671-4d92-9eab-60c4225eb208-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-46j67\" (UID: \"28dcf9ce-1671-4d92-9eab-60c4225eb208\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.553065 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.555175 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtvxh\" (UniqueName: \"kubernetes.io/projected/db247a50-420f-448d-9385-057577216de5-kube-api-access-vtvxh\") pod \"service-ca-9c57cc56f-nch9b\" (UID: \"db247a50-420f-448d-9385-057577216de5\") " pod="openshift-service-ca/service-ca-9c57cc56f-nch9b"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.556335 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz"]
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.559428 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.567160 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.573545 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.581930 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-65ddv"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.591396 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-4j967"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.613748 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-gtfpw"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.622690 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.623058 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.123038429 +0000 UTC m=+142.758863511 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.650676 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6w4lm"]
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.670479 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fd6bm"]
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.724472 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.725589 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.225572566 +0000 UTC m=+142.861397648 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.745912 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.755659 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.794060 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.826366 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.826829 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.326810091 +0000 UTC m=+142.962635173 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.844616 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-nch9b"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.931288 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:25:58 crc kubenswrapper[4695]: E1126 13:25:58.933866 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.433831762 +0000 UTC m=+143.069656834 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.938445 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" event={"ID":"983891ce-aeec-413f-ab55-ac0789f59708","Type":"ContainerStarted","Data":"92e2a5b779adbf7519f8fd6afcf0efc40e25a3a3098add13f17f426e748bcdba"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.938494 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" event={"ID":"983891ce-aeec-413f-ab55-ac0789f59708","Type":"ContainerStarted","Data":"2fac1da811c80c710fd68a39964673734792dfece5594ceb79208592d26998af"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.940366 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" event={"ID":"af661467-8667-4b09-ae7a-9ad7c99a7de5","Type":"ContainerStarted","Data":"45d266f67abcd197c8b738763f984ad4fd3fa3327c827179d7d30ff6340b087b"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.948259 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" event={"ID":"19892b0f-6ec5-4376-a584-c23b3938ed82","Type":"ContainerStarted","Data":"d82175cb55329e35836ba1e8677d2c4fb7c89e30482fdf85731e262fc93e71f0"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.951375 4695 generic.go:334] "Generic (PLEG): container finished" podID="2dc1db4a-8e2b-4e1f-909c-08e8ee336041" containerID="231799b83fa5fd80c148e3f92871fd25d1e6ccda7abf03675067ae1178729eca" exitCode=0
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.951430 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" event={"ID":"2dc1db4a-8e2b-4e1f-909c-08e8ee336041","Type":"ContainerDied","Data":"231799b83fa5fd80c148e3f92871fd25d1e6ccda7abf03675067ae1178729eca"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.955745 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" event={"ID":"b73de046-eed4-42dc-99ac-37febbf86b98","Type":"ContainerStarted","Data":"57877a62be47da3bacb47517990a09993ac551e3458be92ccacc42db37222460"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.955774 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" event={"ID":"b73de046-eed4-42dc-99ac-37febbf86b98","Type":"ContainerStarted","Data":"557a0a061e57e6e984b5636e878a5fa869538ec7989e360fa40bbe9d5df96a74"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.956727 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" event={"ID":"29726719-a46b-4403-b241-5397d624f714","Type":"ContainerStarted","Data":"388fa39dafb3321e586dc50bfc45c54de50a7b6f38abaa3eaae25a44e014931e"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.957721 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4fqcc" event={"ID":"d84d0827-d7fe-42eb-adbe-eda35247c26c","Type":"ContainerStarted","Data":"cc898b040f3ee7eac3ea023613ca8cff77844e181b83e735728ea39810c34273"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.958643 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" event={"ID":"7c5213f4-2ee5-4136-b62c-7b291044e467","Type":"ContainerStarted","Data":"68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.959869 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.961118 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" event={"ID":"11a6c05a-5247-4b9c-9fb7-3061240e6953","Type":"ContainerStarted","Data":"f0799431d604aaf184847b0f07184761464388910bda2ad130eeb169c0fed1be"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.961231 4695 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5mdsv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.961276 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" podUID="7c5213f4-2ee5-4136-b62c-7b291044e467" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.961947 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" event={"ID":"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85","Type":"ContainerStarted","Data":"8ffb3321c99f1798bfd9a44f638bd2341c47112b6979cdf8fd7571b9d9b77247"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.965359 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t8w9k" event={"ID":"0df7974d-d44f-4c17-b4a4-afcef9078807","Type":"ContainerStarted","Data":"3f52a090431e78a9dec80cae58d98013a372831d1360e9a52110b18d0c4ec715"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.981033 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh"]
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.981506 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" event={"ID":"2aacf7d4-37f8-467c-bc7d-dc773cab58d3","Type":"ContainerStarted","Data":"7a8c8499134a9bb7656ac03e3a8cc7e7349a58f8bd01bb746a3d18a3a7b3442a"}
Nov 26 13:25:58 crc kubenswrapper[4695]: I1126 13:25:58.986235 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" event={"ID":"395c4612-671d-4cd4-9770-4dd41789c3c1","Type":"ContainerStarted","Data":"6329fc585e60fb7524bc8515874d0316276e31645ba325acccd32335760f8c5d"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.000031 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" event={"ID":"7f633ea2-f78f-4a36-8f28-13c2f053c349","Type":"ContainerStarted","Data":"e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.000082 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" event={"ID":"7f633ea2-f78f-4a36-8f28-13c2f053c349","Type":"ContainerStarted","Data":"d45de231aba42616981b31ab568d44cee447efdb0daf269f8a8f251032d6861e"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.001946 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp"
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.003379 4695 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8q8mp container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.004175 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" podUID="7f633ea2-f78f-4a36-8f28-13c2f053c349" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused"
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.004943 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" event={"ID":"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441","Type":"ContainerStarted","Data":"c3f97acc114cb445080f659a891ceead3b11430c454629a5007244cced51b36d"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.005565 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x97bq" event={"ID":"6b0b4bb2-6319-4f1b-ba1c-80256970147d","Type":"ContainerStarted","Data":"340c6d44b21aa6862a6ee2f0e34dfb2c4a8599be09e5760742f67ee1402b7109"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.007147 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" event={"ID":"6c61b802-51ae-4857-b4ee-6845ea44d69e","Type":"ContainerStarted","Data":"040cb88916e32722ba0624a0eed0f428f3bd402115d1447d0efb48ae94cc2b6c"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.025852 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" event={"ID":"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca","Type":"ContainerStarted","Data":"e5ff53f92c0407abe581cbdb078838bc79d0a143bcc90b2a624086b7aceabfda"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.025914 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" event={"ID":"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca","Type":"ContainerStarted","Data":"a668a4f976482434e34b306fa4edb375f43c99559ff0c9e5bd8334a3b7dd7a38"}
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.033270 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66"]
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.035315 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.040440 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.540413049 +0000 UTC m=+143.176238131 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.053950 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5ddpz"]
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.066605 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5"]
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.109220 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8"]
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.141306 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.142431 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.642418569 +0000 UTC m=+143.278243651 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.183050 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-g7rk6"]
Nov 26 13:25:59 crc kubenswrapper[4695]: W1126 13:25:59.196549 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a4dc693_724a_4ddd_a4c4_8d213f4c1a1f.slice/crio-c6ef0c177511a8e9e78d39e50f1b2339e3d0589886d9563729b573c79e4a4ddf WatchSource:0}: Error finding container c6ef0c177511a8e9e78d39e50f1b2339e3d0589886d9563729b573c79e4a4ddf: Status 404 returned error can't find the container with id c6ef0c177511a8e9e78d39e50f1b2339e3d0589886d9563729b573c79e4a4ddf
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.232659 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt"]
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.242207 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.242532 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.742515407 +0000 UTC m=+143.378340489 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.293296 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9"]
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.295374 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn"]
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.343547 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.343948 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.843930838 +0000 UTC m=+143.479755920 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.444889 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.445606 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:25:59.945587897 +0000 UTC m=+143.581412979 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.462858 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq"] Nov 26 13:25:59 crc kubenswrapper[4695]: W1126 13:25:59.485337 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7645629b_9409_48bb_94cc_a63e3bd1fe4b.slice/crio-e201fe8171f950fc50ff272f990861b2d7442361b94c43ad37cb6dfca57ae42c WatchSource:0}: Error finding container e201fe8171f950fc50ff272f990861b2d7442361b94c43ad37cb6dfca57ae42c: Status 404 returned error can't find the container with id e201fe8171f950fc50ff272f990861b2d7442361b94c43ad37cb6dfca57ae42c Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.547112 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.547575 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-26 13:26:00.047531745 +0000 UTC m=+143.683356827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.633539 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" podStartSLOduration=122.633493869 podStartE2EDuration="2m2.633493869s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:59.589845675 +0000 UTC m=+143.225670777" watchObservedRunningTime="2025-11-26 13:25:59.633493869 +0000 UTC m=+143.269318951" Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.649741 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.649950 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.149920157 +0000 UTC m=+143.785745239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.650223 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.650589 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.150580009 +0000 UTC m=+143.786405281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: W1126 13:25:59.651015 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062a5bcb_3e66_473a_b594_5cf089a9fb73.slice/crio-136a48efdee9c63b20c4164ec0d3d6372228fc06fd4db1681cfa7de71e3d1e4f WatchSource:0}: Error finding container 136a48efdee9c63b20c4164ec0d3d6372228fc06fd4db1681cfa7de71e3d1e4f: Status 404 returned error can't find the container with id 136a48efdee9c63b20c4164ec0d3d6372228fc06fd4db1681cfa7de71e3d1e4f Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.671177 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" podStartSLOduration=122.67115468 podStartE2EDuration="2m2.67115468s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:25:59.667666947 +0000 UTC m=+143.303492019" watchObservedRunningTime="2025-11-26 13:25:59.67115468 +0000 UTC m=+143.306979762" Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.753261 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 
13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.761700 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.2616514 +0000 UTC m=+143.897476482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.863053 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.863974 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.363961719 +0000 UTC m=+143.999786801 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.917748 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzxm6"] Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.940452 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-4j967"] Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.954399 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7"] Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.965285 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.965571 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.465534465 +0000 UTC m=+144.101359547 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.965897 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:25:59 crc kubenswrapper[4695]: E1126 13:25:59.966271 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.466264659 +0000 UTC m=+144.102089741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.980866 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv"] Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.983468 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf"] Nov 26 13:25:59 crc kubenswrapper[4695]: I1126 13:25:59.986737 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf"] Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.048288 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" event={"ID":"7645629b-9409-48bb-94cc-a63e3bd1fe4b","Type":"ContainerStarted","Data":"e201fe8171f950fc50ff272f990861b2d7442361b94c43ad37cb6dfca57ae42c"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.069431 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.069854 4695 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.569835289 +0000 UTC m=+144.205660361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.077230 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" event={"ID":"11a6c05a-5247-4b9c-9fb7-3061240e6953","Type":"ContainerStarted","Data":"6477b12c33f1cebf642c4e7c2e83991648d478bf8d3b39b464b7f186553d85fd"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.078444 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" Nov 26 13:26:00 crc kubenswrapper[4695]: W1126 13:26:00.078514 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec75b248_8586_4030_a552_86eba44b36fa.slice/crio-60ec86613ec6942f922e9584591f71fa55156ea89910108e13c0dccb3a69a69a WatchSource:0}: Error finding container 60ec86613ec6942f922e9584591f71fa55156ea89910108e13c0dccb3a69a69a: Status 404 returned error can't find the container with id 60ec86613ec6942f922e9584591f71fa55156ea89910108e13c0dccb3a69a69a Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.082171 4695 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hrqkb container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.082206 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" podUID="11a6c05a-5247-4b9c-9fb7-3061240e6953" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.087481 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" event={"ID":"062a5bcb-3e66-473a-b594-5cf089a9fb73","Type":"ContainerStarted","Data":"136a48efdee9c63b20c4164ec0d3d6372228fc06fd4db1681cfa7de71e3d1e4f"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.109575 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv"] Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.111817 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" event={"ID":"af661467-8667-4b09-ae7a-9ad7c99a7de5","Type":"ContainerStarted","Data":"15d3b34b1b211d105f2af2dc6b8e75583c09d3fd9f741f4b4dab2b4361de0f69"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.112400 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.126081 4695 patch_prober.go:28] interesting pod/console-operator-58897d9998-6w4lm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/readyz\": 
dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.126158 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" podUID="af661467-8667-4b09-ae7a-9ad7c99a7de5" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/readyz\": dial tcp 10.217.0.31:8443: connect: connection refused" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.135433 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" event={"ID":"19892b0f-6ec5-4376-a584-c23b3938ed82","Type":"ContainerStarted","Data":"78bb4918f8e28008c40674eac28015da4359188c1067a76ae7ce91a29fc28378"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.165372 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67"] Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.172977 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.179998 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.673794102 +0000 UTC m=+144.309619184 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.217063 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-t8w9k" event={"ID":"0df7974d-d44f-4c17-b4a4-afcef9078807","Type":"ContainerStarted","Data":"f6e4d3dca816bc9bbdab20051d133cc1b5719ef481fc1e4ad18411a78b7b88c1"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.227470 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gtfpw" event={"ID":"bd88959e-7f4f-437a-8084-58848727bfdb","Type":"ContainerStarted","Data":"ec632029b6f258bd5e0f0a1d472f8dd3dedc8b6e9280f390e15860d946dbc790"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.233172 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs"] Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.250161 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-x97bq" event={"ID":"6b0b4bb2-6319-4f1b-ba1c-80256970147d","Type":"ContainerStarted","Data":"c14b5cbafdcdc1a8bed8f3b96035203ebf9209f2115e2192c962e63d3be5a2cf"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.252800 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-x97bq" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.264217 4695 patch_prober.go:28] interesting pod/downloads-7954f5f757-x97bq container/download-server 
namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.264317 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x97bq" podUID="6b0b4bb2-6319-4f1b-ba1c-80256970147d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.274118 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4397fbb-62ef-4e2f-9ace-6c76a6e49f85" containerID="d5552885a5ad39110ba66663c2e9524ea621ee3aeaba5270359292b0bc259e6a" exitCode=0 Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.274219 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" event={"ID":"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85","Type":"ContainerDied","Data":"d5552885a5ad39110ba66663c2e9524ea621ee3aeaba5270359292b0bc259e6a"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.274663 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.274925 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.774893883 +0000 UTC m=+144.410718975 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.275167 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.289617 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.789591555 +0000 UTC m=+144.425416637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.293282 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-65ddv"] Nov 26 13:26:00 crc kubenswrapper[4695]: W1126 13:26:00.308726 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28dcf9ce_1671_4d92_9eab_60c4225eb208.slice/crio-0167efa3c39ac0446269715e020d657340a059ed62903f439c16ede24f48640a WatchSource:0}: Error finding container 0167efa3c39ac0446269715e020d657340a059ed62903f439c16ede24f48640a: Status 404 returned error can't find the container with id 0167efa3c39ac0446269715e020d657340a059ed62903f439c16ede24f48640a Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.309019 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh"] Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.309047 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" event={"ID":"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f","Type":"ContainerStarted","Data":"eaac3268e3493710151fddb916a2082df421b5ff74b54527e57041bba2f00226"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.309064 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" 
event={"ID":"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f","Type":"ContainerStarted","Data":"c6ef0c177511a8e9e78d39e50f1b2339e3d0589886d9563729b573c79e4a4ddf"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.338223 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" event={"ID":"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441","Type":"ContainerStarted","Data":"c9fd16c51035e8132f04993f32ae9b46ed8875ff1203f1ddaebbd2d3db4c2b8a"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.338662 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-nch9b"] Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.344123 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" event={"ID":"60d6dff8-6ffc-45f2-be62-83ca50e7ee65","Type":"ContainerStarted","Data":"8b13f3083943728b5646166759a833943ede616c45d31fb0afd77e438a2dfb51"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.351724 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-sjzmc" podStartSLOduration=124.35171351299999 podStartE2EDuration="2m4.351713513s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.350765572 +0000 UTC m=+143.986590654" watchObservedRunningTime="2025-11-26 13:26:00.351713513 +0000 UTC m=+143.987538595" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.371373 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" event={"ID":"eacdeb1b-093e-4110-b6a6-b2605bf167ee","Type":"ContainerStarted","Data":"5e28ca3b74e5f039c00036f1754f34a45be7903b56df028b4b549abe83a6f332"} Nov 26 13:26:00 crc 
kubenswrapper[4695]: I1126 13:26:00.390772 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.391714 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.891691638 +0000 UTC m=+144.527516720 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.396017 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb" podStartSLOduration=123.395999706 podStartE2EDuration="2m3.395999706s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.392913647 +0000 UTC m=+144.028738729" watchObservedRunningTime="2025-11-26 13:26:00.395999706 +0000 UTC m=+144.031824788" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.399624 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" 
event={"ID":"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5","Type":"ContainerStarted","Data":"98b2e886ac28d0ab31e5a8e4405239b9e0c1a5dfceec44164c3fdc2c0dcfb4a9"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.450414 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" event={"ID":"334b6e1b-3607-49f1-9c30-3a22590c2bfa","Type":"ContainerStarted","Data":"7c833a5d9ada198a0d4ebe1d1d201422d5a71b4cce1d04e7a0d58c296e04a916"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.459416 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4jcnm" podStartSLOduration=123.459386995 podStartE2EDuration="2m3.459386995s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.457869286 +0000 UTC m=+144.093694388" watchObservedRunningTime="2025-11-26 13:26:00.459386995 +0000 UTC m=+144.095212077" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.465456 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" event={"ID":"b73de046-eed4-42dc-99ac-37febbf86b98","Type":"ContainerStarted","Data":"92270025ce1cb7ade181d132414f042218f8a115874393bf6fe6c213bb3bcacd"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.481166 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" event={"ID":"29726719-a46b-4403-b241-5397d624f714","Type":"ContainerStarted","Data":"6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.481215 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:26:00 crc 
kubenswrapper[4695]: I1126 13:26:00.491278 4695 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-mxj48 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" start-of-body= Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.491333 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" podUID="29726719-a46b-4403-b241-5397d624f714" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.12:6443/healthz\": dial tcp 10.217.0.12:6443: connect: connection refused" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.492089 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.492673 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4fqcc" event={"ID":"d84d0827-d7fe-42eb-adbe-eda35247c26c","Type":"ContainerStarted","Data":"472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029"} Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.499695 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:00.99967675 +0000 UTC m=+144.635501832 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.504409 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" podStartSLOduration=124.504380992 podStartE2EDuration="2m4.504380992s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.493754249 +0000 UTC m=+144.129579331" watchObservedRunningTime="2025-11-26 13:26:00.504380992 +0000 UTC m=+144.140206074" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.508110 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" event={"ID":"b9a9b369-2369-4fe3-9568-ada564d1c2a6","Type":"ContainerStarted","Data":"fec47fb54d77e787792bdf5cae0f380336bc52d99e7f24ba337ea66adfccc5bc"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.518379 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-t8w9k" podStartSLOduration=123.518357771 podStartE2EDuration="2m3.518357771s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.517568656 +0000 UTC m=+144.153393738" watchObservedRunningTime="2025-11-26 13:26:00.518357771 +0000 UTC m=+144.154182853" Nov 26 13:26:00 crc 
kubenswrapper[4695]: I1126 13:26:00.533721 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" event={"ID":"59f9ffa6-7e49-4182-a02c-9de8f1010928","Type":"ContainerStarted","Data":"41a5673bcaee1af3ff2d8a1bb37e3aa94c60e54b5a9a5f2be5bf0d4ef34b9875"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.546792 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" event={"ID":"6f93277a-4f74-4839-8b28-2ff1bfd6f7ca","Type":"ContainerStarted","Data":"a6b7fa3b0c1928f6f661dfa7996946606eaba09d136899bf5dd7eef82df8defe"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.559845 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" event={"ID":"2aacf7d4-37f8-467c-bc7d-dc773cab58d3","Type":"ContainerStarted","Data":"f8e9a37ba0e667919e296f16019ca0dd132ce024a33321beceaa8b9a67db0670"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.566534 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" event={"ID":"395c4612-671d-4cd4-9770-4dd41789c3c1","Type":"ContainerStarted","Data":"5ecd48e456460e9717ecfd927ca1d51419fbd148494f90d7eecac56d494e933f"} Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.593567 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.593694 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.093671763 +0000 UTC m=+144.729496835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.594177 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.618658 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-x97bq" podStartSLOduration=124.618628815 podStartE2EDuration="2m4.618628815s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.596890767 +0000 UTC m=+144.232715849" watchObservedRunningTime="2025-11-26 13:26:00.618628815 +0000 UTC m=+144.254453897" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.629106 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" 
event={"ID":"3618463d-9c33-4b4c-980f-1a91fca41cbe","Type":"ContainerStarted","Data":"5d30a31bb7309dca20496418184fcc59d167824a4ccf48ab1d4c352017e2ee7b"} Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.636046 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.135996494 +0000 UTC m=+144.771821576 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.660554 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.662987 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" podStartSLOduration=123.662968271 podStartE2EDuration="2m3.662968271s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.658485467 +0000 UTC m=+144.294310549" watchObservedRunningTime="2025-11-26 13:26:00.662968271 +0000 UTC m=+144.298793353" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.698407 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.699794 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.700293 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.20026975 +0000 UTC m=+144.836094832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.767380 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" podStartSLOduration=124.767336577 podStartE2EDuration="2m4.767336577s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.766102987 +0000 UTC m=+144.401928069" watchObservedRunningTime="2025-11-26 13:26:00.767336577 +0000 UTC m=+144.403161659" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.768618 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-zfxpp" podStartSLOduration=123.768612627 podStartE2EDuration="2m3.768612627s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.73260554 +0000 UTC m=+144.368430622" watchObservedRunningTime="2025-11-26 13:26:00.768612627 +0000 UTC m=+144.404437709" Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.799671 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.800153 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.300125431 +0000 UTC m=+144.935950513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:00 crc kubenswrapper[4695]: I1126 13:26:00.903069 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:00 crc kubenswrapper[4695]: E1126 13:26:00.903933 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.403910128 +0000 UTC m=+145.039735210 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.009179 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.009656 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.509638338 +0000 UTC m=+145.145463420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.010213 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-ndgfz" podStartSLOduration=124.010188986 podStartE2EDuration="2m4.010188986s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:00.950789735 +0000 UTC m=+144.586614817" watchObservedRunningTime="2025-11-26 13:26:01.010188986 +0000 UTC m=+144.646014058" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.091162 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" podStartSLOduration=124.091143268 podStartE2EDuration="2m4.091143268s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.090251439 +0000 UTC m=+144.726076521" watchObservedRunningTime="2025-11-26 13:26:01.091143268 +0000 UTC m=+144.726968350" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.093198 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-72w4t" podStartSLOduration=124.093186684 podStartE2EDuration="2m4.093186684s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.025726035 +0000 UTC m=+144.661551117" watchObservedRunningTime="2025-11-26 13:26:01.093186684 +0000 UTC m=+144.729011776" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.116995 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.117429 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.617406873 +0000 UTC m=+145.253231955 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.127204 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.146657 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:01 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:01 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:01 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.146773 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.205331 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" podStartSLOduration=125.205311919 podStartE2EDuration="2m5.205311919s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.20438633 +0000 UTC 
m=+144.840211412" watchObservedRunningTime="2025-11-26 13:26:01.205311919 +0000 UTC m=+144.841137001" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.205441 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" podStartSLOduration=124.205437273 podStartE2EDuration="2m4.205437273s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.149378911 +0000 UTC m=+144.785203983" watchObservedRunningTime="2025-11-26 13:26:01.205437273 +0000 UTC m=+144.841262355" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.220151 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.220528 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.720517199 +0000 UTC m=+145.356342271 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.245709 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4fqcc" podStartSLOduration=125.245678278 podStartE2EDuration="2m5.245678278s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.23922518 +0000 UTC m=+144.875050272" watchObservedRunningTime="2025-11-26 13:26:01.245678278 +0000 UTC m=+144.881503360" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.327057 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.327500 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.827475898 +0000 UTC m=+145.463300970 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.331045 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" podStartSLOduration=125.331012861 podStartE2EDuration="2m5.331012861s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.281870881 +0000 UTC m=+144.917695963" watchObservedRunningTime="2025-11-26 13:26:01.331012861 +0000 UTC m=+144.966837943" Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.432062 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.432977 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:01.932961749 +0000 UTC m=+145.568786831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.535646 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.536033 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.036006122 +0000 UTC m=+145.671831204 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.635363 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-gtfpw" event={"ID":"bd88959e-7f4f-437a-8084-58848727bfdb","Type":"ContainerStarted","Data":"af879df4d900c37d18ca5c3c38f421bc290da506e7092b6f8c5dda97e66a7b5a"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.637313 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.638215 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.138197849 +0000 UTC m=+145.774022931 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.642419 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-g7rk6" event={"ID":"59f9ffa6-7e49-4182-a02c-9de8f1010928","Type":"ContainerStarted","Data":"8b61b28e4658cdae633254285c3e2d0fa5613cc40cf86c783c6fbdf77cdddd90"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.646181 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" event={"ID":"db247a50-420f-448d-9385-057577216de5","Type":"ContainerStarted","Data":"9c9823f7973ff91dda7381ab5b1902099f729a6b812eadcc6cf130eb304968b0"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.646247 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" event={"ID":"db247a50-420f-448d-9385-057577216de5","Type":"ContainerStarted","Data":"fce6534caa377f6e36b1cbc0871febdc0fd4bdd8a4d1c419bc48401ec52cdc38"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.658419 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" event={"ID":"2dc1db4a-8e2b-4e1f-909c-08e8ee336041","Type":"ContainerStarted","Data":"a4f9596d253fc658b6ab89670719b52fb74f226cbf3ef9c64d0525bdaca24483"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.665427 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" event={"ID":"d0b3825e-da05-40b6-ac8d-f469e634a019","Type":"ContainerStarted","Data":"1b3f7144b0451c9154ebbdf7649595f97f63a56eee9d31772f613980adcbdf13"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.665501 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" event={"ID":"d0b3825e-da05-40b6-ac8d-f469e634a019","Type":"ContainerStarted","Data":"bfee2876eb26e8e22dec7eb76ec36ddade81eca779a139abb6dc08d0b6d07d20"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.665514 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" event={"ID":"d0b3825e-da05-40b6-ac8d-f469e634a019","Type":"ContainerStarted","Data":"edc60a61dd707e68aa70fdfb1b78c896a4f068dfd1b6687d6d5414e91ffbf847"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.665553 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf"
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.667914 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" event={"ID":"7645629b-9409-48bb-94cc-a63e3bd1fe4b","Type":"ContainerStarted","Data":"9f2f0db8d22e4474d99ecfb5e9a647d381da58a5e682a6c7a98a073a171fcfe3"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.709039 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-9tvcn" event={"ID":"2aacf7d4-37f8-467c-bc7d-dc773cab58d3","Type":"ContainerStarted","Data":"4d28da5df73367ea08defb1cec0939eaf32a3d2e817b0f67499db2c5aa83a0db"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.711035 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-gtfpw" podStartSLOduration=6.71100972 podStartE2EDuration="6.71100972s" podCreationTimestamp="2025-11-26 13:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.704892733 +0000 UTC m=+145.340717825" watchObservedRunningTime="2025-11-26 13:26:01.71100972 +0000 UTC m=+145.346834802"
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.736913 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" event={"ID":"28dcf9ce-1671-4d92-9eab-60c4225eb208","Type":"ContainerStarted","Data":"0167efa3c39ac0446269715e020d657340a059ed62903f439c16ede24f48640a"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.747142 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.749593 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.24957126 +0000 UTC m=+145.885396332 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.752986 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" podStartSLOduration=124.752949049 podStartE2EDuration="2m4.752949049s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.750646054 +0000 UTC m=+145.386471146" watchObservedRunningTime="2025-11-26 13:26:01.752949049 +0000 UTC m=+145.388774321"
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.800440 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" event={"ID":"ce25ffe5-b9fd-49f6-b465-54a2e2cc1441","Type":"ContainerStarted","Data":"ccd41425629f944eb56a0a619bfe0c8a18124e411637c1027482fea83b245152"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.844118 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" podStartSLOduration=124.844093109 podStartE2EDuration="2m4.844093109s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.830684148 +0000 UTC m=+145.466509230" watchObservedRunningTime="2025-11-26 13:26:01.844093109 +0000 UTC m=+145.479918191"
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.851628 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.854448 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.35440769 +0000 UTC m=+145.990232942 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.864622 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-nch9b" podStartSLOduration=124.864597998 podStartE2EDuration="2m4.864597998s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.85374781 +0000 UTC m=+145.489572892" watchObservedRunningTime="2025-11-26 13:26:01.864597998 +0000 UTC m=+145.500423080"
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.871959 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-x8px5" event={"ID":"60d6dff8-6ffc-45f2-be62-83ca50e7ee65","Type":"ContainerStarted","Data":"d3c284acce0572c876681788d9cabd8310ce022bb3ab1ef03c2ab9fa024c5800"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.925487 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs" event={"ID":"38e216fe-b3a6-4e07-a537-79f9711617b2","Type":"ContainerStarted","Data":"4281cb4605dea07771ddac8f2afc02a183f67e8a20dea702c4c6c2df018ef9d7"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.925545 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs" event={"ID":"38e216fe-b3a6-4e07-a537-79f9711617b2","Type":"ContainerStarted","Data":"9794ddbb7af89be9e2eb4ea54825f7a451ecdf0c76f687c4b036ccae850a5088"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.952863 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" event={"ID":"b7aebb0b-9f0d-43dc-ac9e-c275e9406825","Type":"ContainerStarted","Data":"0f294cc8a0bac5ff5fe206a7ddda6d51f77bab4c1b183a3e26b601f401d0eef5"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.952923 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" event={"ID":"b7aebb0b-9f0d-43dc-ac9e-c275e9406825","Type":"ContainerStarted","Data":"35df49b28a5d186885ad4960de9ee0a7ae6e5c1b0b4818b909dbca198d84f68a"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.954114 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7"
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.955406 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:26:01 crc kubenswrapper[4695]: E1126 13:26:01.959073 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.459040335 +0000 UTC m=+146.094865417 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.983833 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-5sbb9" podStartSLOduration=124.983808111 podStartE2EDuration="2m4.983808111s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.903637994 +0000 UTC m=+145.539463076" watchObservedRunningTime="2025-11-26 13:26:01.983808111 +0000 UTC m=+145.619633193"
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.986641 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" event={"ID":"38941a1c-fbd3-4909-b5f3-2128059e9d95","Type":"ContainerStarted","Data":"7442031a6dfb85e71316a2efd64a2e356bbad5c2ab0e1d72be098f0c4377e121"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.986693 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" event={"ID":"38941a1c-fbd3-4909-b5f3-2128059e9d95","Type":"ContainerStarted","Data":"b2942ed564723e549881c6f1d12c5a19fc2080b79599a25e693083ec927af3fe"}
Nov 26 13:26:01 crc kubenswrapper[4695]: I1126 13:26:01.987594 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.005135 4695 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dcdr7 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.005214 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" podUID="b7aebb0b-9f0d-43dc-ac9e-c275e9406825" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.021083 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4j967" event={"ID":"ec75b248-8586-4030-a552-86eba44b36fa","Type":"ContainerStarted","Data":"570c4b07d530ff4f5d54cc82091804dfe29226d7dd17f5f282c3b15188ca7bd1"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.021164 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4j967" event={"ID":"ec75b248-8586-4030-a552-86eba44b36fa","Type":"ContainerStarted","Data":"60ec86613ec6942f922e9584591f71fa55156ea89910108e13c0dccb3a69a69a"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.025335 4695 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4fqh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.025415 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" podUID="38941a1c-fbd3-4909-b5f3-2128059e9d95" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.037777 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-skqtf" podStartSLOduration=125.037757546 podStartE2EDuration="2m5.037757546s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:01.983935895 +0000 UTC m=+145.619760977" watchObservedRunningTime="2025-11-26 13:26:02.037757546 +0000 UTC m=+145.673582628"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.039091 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" podStartSLOduration=125.039083218 podStartE2EDuration="2m5.039083218s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.038470629 +0000 UTC m=+145.674295711" watchObservedRunningTime="2025-11-26 13:26:02.039083218 +0000 UTC m=+145.674908300"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.057728 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.059890 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.559867757 +0000 UTC m=+146.195692839 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.066832 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" event={"ID":"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd","Type":"ContainerStarted","Data":"e3276f9066e9924e91cd5afd6cd5d523d4b41ad7c983f3e24ea515cb93f86dbe"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.066890 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" event={"ID":"fe2b5f0a-b8d0-43e6-a669-3bf0fea31bcd","Type":"ContainerStarted","Data":"04b79199d672d39ea5413090415a3c0058ab4a25d4b0e70bfbbd6c2285629236"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.090659 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" event={"ID":"fc485407-1013-4868-83de-ec51c4cdb030","Type":"ContainerStarted","Data":"5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.090986 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.090997 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" event={"ID":"fc485407-1013-4868-83de-ec51c4cdb030","Type":"ContainerStarted","Data":"775b4bcaea9b28a1ecc028ea46571611bec91d9d1f03901bbde478d2e0b357ec"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.105036 4695 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dzxm6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.105095 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" podUID="fc485407-1013-4868-83de-ec51c4cdb030" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.120426 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" podStartSLOduration=125.120397393 podStartE2EDuration="2m5.120397393s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.116235149 +0000 UTC m=+145.752060231" watchObservedRunningTime="2025-11-26 13:26:02.120397393 +0000 UTC m=+145.756222475"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.130628 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" event={"ID":"b9a9b369-2369-4fe3-9568-ada564d1c2a6","Type":"ContainerStarted","Data":"b85cac8689fd2f9e52b918b29d8980e90364b106fb738cf729f66c8eecf4f3fa"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.136598 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 26 13:26:02 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld
Nov 26 13:26:02 crc kubenswrapper[4695]: [+]process-running ok
Nov 26 13:26:02 crc kubenswrapper[4695]: healthz check failed
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.136659 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.146186 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" event={"ID":"6c61b802-51ae-4857-b4ee-6845ea44d69e","Type":"ContainerStarted","Data":"1269f4bace47d53e34d57c2050890dd25a6d81504d6501c187cafbe2c1f708e2"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.146239 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" event={"ID":"6c61b802-51ae-4857-b4ee-6845ea44d69e","Type":"ContainerStarted","Data":"a237acb718f40a1f41798de6651657ee5e938e2934ea13c9ec6cba18e7c1dfe4"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.159011 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.162378 4695 generic.go:334] "Generic (PLEG): container finished" podID="3618463d-9c33-4b4c-980f-1a91fca41cbe" containerID="9fd7b6fa91f146e05db842dc42d48280c9c0aa8a80502751a312dce4f760cf18" exitCode=0
Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.162428 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.662340241 +0000 UTC m=+146.298165343 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.162488 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" event={"ID":"3618463d-9c33-4b4c-980f-1a91fca41cbe","Type":"ContainerDied","Data":"9fd7b6fa91f146e05db842dc42d48280c9c0aa8a80502751a312dce4f760cf18"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.189059 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lbn66" event={"ID":"eacdeb1b-093e-4110-b6a6-b2605bf167ee","Type":"ContainerStarted","Data":"6f688ee325424ca4bb6e54dc4815ff5e0568b506cc6dc7311cd1fa9723055eb4"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.210698 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" podStartSLOduration=125.210676536 podStartE2EDuration="2m5.210676536s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.20427154 +0000 UTC m=+145.840096612" watchObservedRunningTime="2025-11-26 13:26:02.210676536 +0000 UTC m=+145.846501618"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.235167 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" event={"ID":"941f9959-8397-4065-bbbf-79ef68aa8148","Type":"ContainerStarted","Data":"36e1649433405908cf404ae24208dfff603eccd0e850df6380e189ae0a8c075c"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.235257 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" event={"ID":"941f9959-8397-4065-bbbf-79ef68aa8148","Type":"ContainerStarted","Data":"142512b1f3a883eb886e694ba05a0e083e1134267c8267b86822488d2685c767"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.247184 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" event={"ID":"334b6e1b-3607-49f1-9c30-3a22590c2bfa","Type":"ContainerStarted","Data":"d5a3afcb32daae989896497cbeb2fed96afd733a12c261cb467eca8111d01154"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.257823 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" event={"ID":"062a5bcb-3e66-473a-b594-5cf089a9fb73","Type":"ContainerStarted","Data":"13ac321ff249c27378d019d5c35efd4da0597665babe94c89432ff7d38abb333"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.265282 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.266009 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.765995464 +0000 UTC m=+146.401820546 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.286816 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" event={"ID":"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd","Type":"ContainerStarted","Data":"822217c3b493435d40bf2c8222145d1f636f1684975809734db742d6dac70851"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.286870 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" event={"ID":"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd","Type":"ContainerStarted","Data":"15222311bb3a4ba0ef2512fe7e7035a46360077c95a5c2122563d6c1f8b10703"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.317423 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-65ddv" event={"ID":"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736","Type":"ContainerStarted","Data":"6fb6113ebda13a3f1647b89ce2307e8cb902ab23574b8961491fab2dc9677b51"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.317495 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-65ddv" event={"ID":"dd6ca3ba-1c2b-43b4-b324-a197ad4ed736","Type":"ContainerStarted","Data":"3d9bb757fcbe9ef5a4e56210475b548b69a15e36643ccfee09adcd31ffba26e8"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.355624 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-lg6fv" podStartSLOduration=125.355598915 podStartE2EDuration="2m5.355598915s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.269060623 +0000 UTC m=+145.904885705" watchObservedRunningTime="2025-11-26 13:26:02.355598915 +0000 UTC m=+145.991423997"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.366210 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.369340 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.869316766 +0000 UTC m=+146.505141848 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.404338 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" event={"ID":"1a4dc693-724a-4ddd-a4c4-8d213f4c1a1f","Type":"ContainerStarted","Data":"879e24201bc90c650e87cfa086a7182a7f25e735b59dabf3e0e3f73f078a0e58"}
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.407807 4695 patch_prober.go:28] interesting pod/downloads-7954f5f757-x97bq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.407902 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x97bq" podUID="6b0b4bb2-6319-4f1b-ba1c-80256970147d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.419071 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-bcgtq" podStartSLOduration=125.419051196 podStartE2EDuration="2m5.419051196s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.356287268 +0000 UTC m=+145.992112350" watchObservedRunningTime="2025-11-26 13:26:02.419051196 +0000 UTC m=+146.054876278"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.420113 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hrqkb"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.476056 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv"
Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.482399 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:02.982373992 +0000 UTC m=+146.618199074 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.535302 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.535881 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.541330 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.574724 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fd6bm" podStartSLOduration=125.574697531 podStartE2EDuration="2m5.574697531s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.51218316 +0000 UTC m=+146.148008242" watchObservedRunningTime="2025-11-26 13:26:02.574697531 +0000 UTC m=+146.210522613"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.581798 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.582369 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.082331366 +0000 UTC m=+146.718156448 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.675752 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-rbvpt" podStartSLOduration=125.675727369 podStartE2EDuration="2m5.675727369s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.675556864 +0000 UTC m=+146.311381946" watchObservedRunningTime="2025-11-26 13:26:02.675727369 +0000 UTC m=+146.311552451"
Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.678361 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-64nqf" podStartSLOduration=125.678336903 podStartE2EDuration="2m5.678336903s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC"
observedRunningTime="2025-11-26 13:26:02.580962562 +0000 UTC m=+146.216787644" watchObservedRunningTime="2025-11-26 13:26:02.678336903 +0000 UTC m=+146.314161985" Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.683096 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.683530 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.183517429 +0000 UTC m=+146.819342511 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.721973 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" podStartSLOduration=125.721952575 podStartE2EDuration="2m5.721952575s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.716470869 +0000 UTC m=+146.352295951" watchObservedRunningTime="2025-11-26 
13:26:02.721952575 +0000 UTC m=+146.357777657" Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.786300 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.786851 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.286831731 +0000 UTC m=+146.922656803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.810888 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-65ddv" podStartSLOduration=7.810856534 podStartE2EDuration="7.810856534s" podCreationTimestamp="2025-11-26 13:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.764966828 +0000 UTC m=+146.400791910" watchObservedRunningTime="2025-11-26 13:26:02.810856534 +0000 UTC m=+146.446681616" Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.841624 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console-operator/console-operator-58897d9998-6w4lm" Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.888230 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.888589 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.388577473 +0000 UTC m=+147.024402555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:02 crc kubenswrapper[4695]: I1126 13:26:02.935061 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4s6xh" podStartSLOduration=125.935043327 podStartE2EDuration="2m5.935043327s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:02.934946044 +0000 UTC m=+146.570771126" watchObservedRunningTime="2025-11-26 13:26:02.935043327 +0000 UTC m=+146.570868409" Nov 26 13:26:02 crc kubenswrapper[4695]: 
I1126 13:26:02.990554 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:02 crc kubenswrapper[4695]: E1126 13:26:02.990985 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.490965845 +0000 UTC m=+147.126790927 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.092209 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.092537 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-26 13:26:03.592525051 +0000 UTC m=+147.228350133 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.129491 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:03 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:03 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:03 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.129555 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.193606 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.194074 4695 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.694055776 +0000 UTC m=+147.329880858 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.246891 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.296001 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.296806 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.796786679 +0000 UTC m=+147.432611761 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.398232 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.398565 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:03.898546831 +0000 UTC m=+147.534371913 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.415387 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-fjvrv" event={"ID":"ff48eb97-fabb-4faa-b4dc-25aa1897d5dd","Type":"ContainerStarted","Data":"0b1ef31b571700278620371c4cdd4196f60284999fa5be8813262f0c9c1b8c2b"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.417766 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs" event={"ID":"38e216fe-b3a6-4e07-a537-79f9711617b2","Type":"ContainerStarted","Data":"c7b5bfdbd3378a533e6f140ef81247a1a984fafa985dffa7d5fb2c1015f89ec8"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.420839 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" event={"ID":"3618463d-9c33-4b4c-980f-1a91fca41cbe","Type":"ContainerStarted","Data":"bf11ffdea9b41233391ad7afebf360b4061ee23fbc52dc29aaa03b85bc25b7da"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.421284 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.423719 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-4j967" 
event={"ID":"ec75b248-8586-4030-a552-86eba44b36fa","Type":"ContainerStarted","Data":"afa998859f1c099c78ef81293c8e642b4c0b6bb781c2c0cd8fe11729c2a2fe87"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.424103 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-4j967" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.425911 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" event={"ID":"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5","Type":"ContainerStarted","Data":"b2e4ca72be0065f663d2ce55733626328a2819c5072e508776dc174e04e136b6"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.431745 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" event={"ID":"28dcf9ce-1671-4d92-9eab-60c4225eb208","Type":"ContainerStarted","Data":"5b38685c4845b4b1b9ad93577ccd837d2236b9e609414a0c039487b47919791c"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.443473 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" event={"ID":"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85","Type":"ContainerStarted","Data":"df8b66a214f84959cebbda3c4d83bd3c9fa74c3b7ced36156a744c335d06ea5a"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.443520 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" event={"ID":"f4397fbb-62ef-4e2f-9ace-6c76a6e49f85","Type":"ContainerStarted","Data":"23d48c30c59a8f5d1801559ff268c8acfc99ea50524eef3dc2911cc51ae9909b"} Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.445183 4695 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4fqh container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" 
start-of-body= Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.445219 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" podUID="38941a1c-fbd3-4909-b5f3-2128059e9d95" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.445335 4695 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-dzxm6 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.445371 4695 patch_prober.go:28] interesting pod/downloads-7954f5f757-x97bq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.445403 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x97bq" podUID="6b0b4bb2-6319-4f1b-ba1c-80256970147d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.445427 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" podUID="fc485407-1013-4868-83de-ec51c4cdb030" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.455392 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2zqsx" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.491720 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" podStartSLOduration=127.491688016 podStartE2EDuration="2m7.491688016s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:03.488418691 +0000 UTC m=+147.124243773" watchObservedRunningTime="2025-11-26 13:26:03.491688016 +0000 UTC m=+147.127513098" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.492449 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-t7whs" podStartSLOduration=126.49243948 podStartE2EDuration="2m6.49243948s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:03.458213729 +0000 UTC m=+147.094038811" watchObservedRunningTime="2025-11-26 13:26:03.49243948 +0000 UTC m=+147.128264562" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.499411 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.502088 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-26 13:26:04.00206966 +0000 UTC m=+147.637894742 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.601515 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.601848 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.101829167 +0000 UTC m=+147.737654249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.615026 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-4j967" podStartSLOduration=8.61500633 podStartE2EDuration="8.61500633s" podCreationTimestamp="2025-11-26 13:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:03.613815563 +0000 UTC m=+147.249640645" watchObservedRunningTime="2025-11-26 13:26:03.61500633 +0000 UTC m=+147.250831412" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.676744 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" podStartSLOduration=126.676714175 podStartE2EDuration="2m6.676714175s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:03.676211559 +0000 UTC m=+147.312036641" watchObservedRunningTime="2025-11-26 13:26:03.676714175 +0000 UTC m=+147.312539257" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.695715 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-46j67" podStartSLOduration=126.695700326 podStartE2EDuration="2m6.695700326s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:03.695059975 +0000 UTC m=+147.330885057" watchObservedRunningTime="2025-11-26 13:26:03.695700326 +0000 UTC m=+147.331525408" Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.703625 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.703984 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.203969082 +0000 UTC m=+147.839794164 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.804638 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.804809 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.304779313 +0000 UTC m=+147.940604395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.805314 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.805827 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.305804035 +0000 UTC m=+147.941629117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.906252 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.906470 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.406440961 +0000 UTC m=+148.042266043 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:03 crc kubenswrapper[4695]: I1126 13:26:03.906591 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:03 crc kubenswrapper[4695]: E1126 13:26:03.907217 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.407184765 +0000 UTC m=+148.043009847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.007249 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.007748 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.507717718 +0000 UTC m=+148.143542790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.109191 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.109605 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.609588284 +0000 UTC m=+148.245413366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.128731 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:04 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:04 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:04 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.128830 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.210084 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.210267 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-26 13:26:04.71023937 +0000 UTC m=+148.346064452 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.210400 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.210700 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.710692734 +0000 UTC m=+148.346517816 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.313986 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.314629 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.814605436 +0000 UTC m=+148.450430528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.352839 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dcdr7" Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.415565 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.415999 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:04.915963404 +0000 UTC m=+148.551788486 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.451160 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" event={"ID":"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5","Type":"ContainerStarted","Data":"2196807532bd6ad6088d25e26cd59e949f169bb353da129808425d115194b395"} Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.516949 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.518091 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.018034846 +0000 UTC m=+148.653859938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.618780 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.619147 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.119134867 +0000 UTC m=+148.754959949 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.720016 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.720370 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.220339892 +0000 UTC m=+148.856164964 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.820974 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.821383 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.32137085 +0000 UTC m=+148.957195932 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.853292 4695 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.921824 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.922196 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.422159031 +0000 UTC m=+149.057984113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:04 crc kubenswrapper[4695]: I1126 13:26:04.922586 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:04 crc kubenswrapper[4695]: E1126 13:26:04.922953 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.422940615 +0000 UTC m=+149.058765697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.024035 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:05 crc kubenswrapper[4695]: E1126 13:26:05.024226 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.524195681 +0000 UTC m=+149.160020763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.024572 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:05 crc kubenswrapper[4695]: E1126 13:26:05.024916 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.524901004 +0000 UTC m=+149.160726086 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.125824 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:05 crc kubenswrapper[4695]: E1126 13:26:05.126013 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.625976174 +0000 UTC m=+149.261801256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.126076 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.126131 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.126167 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.126199 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:05 crc kubenswrapper[4695]: E1126 13:26:05.126727 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.626715297 +0000 UTC m=+149.262540379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.126911 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.127498 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.134246 4695 
patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:05 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:05 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:05 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.134306 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.135268 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.144185 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.156401 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.228542 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:05 crc kubenswrapper[4695]: E1126 13:26:05.229181 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.729153371 +0000 UTC m=+149.364978453 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.329926 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:05 crc kubenswrapper[4695]: E1126 13:26:05.330452 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-11-26 13:26:05.830429268 +0000 UTC m=+149.466254350 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-dl9mv" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.336444 4695 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-26T13:26:04.853371419Z","Handler":null,"Name":""} Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.340950 4695 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.340994 4695 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.387857 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.398553 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.419816 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.432029 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.441495 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.483506 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" event={"ID":"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5","Type":"ContainerStarted","Data":"27c39a7a5c6635b79495bf846a950cd278ebdbbce5bdbec4b9a51a78dd46108b"} Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.483858 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" event={"ID":"a1d9a9d9-f2d4-4416-a90e-4f56377fe2c5","Type":"ContainerStarted","Data":"df78212b93a44eb1847bb85b6f4d12b50dc2adab179ad1a5407afc80b9ddc047"} Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.533580 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: 
\"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.546403 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-lwlnn" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.553765 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5ddpz" podStartSLOduration=10.553735588 podStartE2EDuration="10.553735588s" podCreationTimestamp="2025-11-26 13:25:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:05.533327932 +0000 UTC m=+149.169153014" watchObservedRunningTime="2025-11-26 13:26:05.553735588 +0000 UTC m=+149.189560670" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.595238 4695 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.595285 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.596146 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84jk7"] Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.597399 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.602011 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.604255 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84jk7"] Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.647107 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr95c\" (UniqueName: \"kubernetes.io/projected/1f6308de-c770-4097-807a-ea8d1fd17151-kube-api-access-vr95c\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.647196 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-utilities\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.647250 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-catalog-content\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.744019 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9tvmf"] Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.744911 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.747713 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.748860 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr95c\" (UniqueName: \"kubernetes.io/projected/1f6308de-c770-4097-807a-ea8d1fd17151-kube-api-access-vr95c\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.748946 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-utilities\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.749009 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-catalog-content\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.750260 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-utilities\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.750394 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-catalog-content\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.774511 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr95c\" (UniqueName: \"kubernetes.io/projected/1f6308de-c770-4097-807a-ea8d1fd17151-kube-api-access-vr95c\") pod \"community-operators-84jk7\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.777670 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tvmf"] Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.850338 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-catalog-content\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.850423 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gxdm\" (UniqueName: \"kubernetes.io/projected/838cffa7-c983-4531-9b48-8397076df516-kube-api-access-5gxdm\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.850444 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-utilities\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") 
" pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.951859 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-catalog-content\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.951936 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gxdm\" (UniqueName: \"kubernetes.io/projected/838cffa7-c983-4531-9b48-8397076df516-kube-api-access-5gxdm\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.951955 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-utilities\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.952394 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-utilities\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.953009 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-catalog-content\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " 
pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.954324 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lkqtp"] Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.955556 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.975782 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkqtp"] Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.991974 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gxdm\" (UniqueName: \"kubernetes.io/projected/838cffa7-c983-4531-9b48-8397076df516-kube-api-access-5gxdm\") pod \"certified-operators-9tvmf\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:05 crc kubenswrapper[4695]: I1126 13:26:05.998490 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.053235 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-catalog-content\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.053307 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlhnf\" (UniqueName: \"kubernetes.io/projected/53142449-ccfa-4cee-a77e-c1a4f9178691-kube-api-access-hlhnf\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.053544 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-utilities\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.075263 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.140514 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:06 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:06 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:06 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.140615 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.155321 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlhnf\" (UniqueName: \"kubernetes.io/projected/53142449-ccfa-4cee-a77e-c1a4f9178691-kube-api-access-hlhnf\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.155803 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-utilities\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.155849 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-catalog-content\") pod \"community-operators-lkqtp\" 
(UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.156491 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-catalog-content\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.156774 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-utilities\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.156897 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-k87lh"] Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.159083 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.193505 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k87lh"] Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.206459 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlhnf\" (UniqueName: \"kubernetes.io/projected/53142449-ccfa-4cee-a77e-c1a4f9178691-kube-api-access-hlhnf\") pod \"community-operators-lkqtp\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: W1126 13:26:06.212742 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-c4753b1add251743d69af2d09cf892b085d7b79311dd6c43b74bc7f7359af397 WatchSource:0}: Error finding container c4753b1add251743d69af2d09cf892b085d7b79311dd6c43b74bc7f7359af397: Status 404 returned error can't find the container with id c4753b1add251743d69af2d09cf892b085d7b79311dd6c43b74bc7f7359af397 Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.236041 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-dl9mv\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.270235 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.272530 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d5ds\" (UniqueName: \"kubernetes.io/projected/45f264e1-d601-4586-81f1-1ddbb10c5bc1-kube-api-access-2d5ds\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.272626 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-utilities\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.272660 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-catalog-content\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.374252 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-utilities\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.374592 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-catalog-content\") pod 
\"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.374652 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d5ds\" (UniqueName: \"kubernetes.io/projected/45f264e1-d601-4586-81f1-1ddbb10c5bc1-kube-api-access-2d5ds\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.375420 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-catalog-content\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.375631 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-utilities\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.397391 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.397637 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.401422 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d5ds\" (UniqueName: \"kubernetes.io/projected/45f264e1-d601-4586-81f1-1ddbb10c5bc1-kube-api-access-2d5ds\") pod \"certified-operators-k87lh\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") " pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.434424 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.499000 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7a841da9e3fb3b3739b28212965dc07ed8f2389a31c9af37c745b22bae0f4e96"} Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.499120 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c4753b1add251743d69af2d09cf892b085d7b79311dd6c43b74bc7f7359af397"} Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.499315 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.515451 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"334548a96a6dec496abf49faa7d83e0521315d52ce72b16282199d1d2d948740"} Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.515523 4695 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"516b47506afee34277321f2a92ab6a80fe250cc932f9dc1dd6af24ea13215d76"} Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.519664 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a5a74e265097ea0ea701ffcc8cf73f87e3c5bdc2f3486e0209102af35f03a2e4"} Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.519720 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a739c4c03e8e7901f28f3832e20d8cd81da251ab11c4e3ea4f5cd2556e34f6be"} Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.529178 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84jk7"] Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.582462 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9tvmf"] Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.608055 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-k87lh" Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.625831 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkqtp"] Nov 26 13:26:06 crc kubenswrapper[4695]: W1126 13:26:06.663206 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53142449_ccfa_4cee_a77e_c1a4f9178691.slice/crio-4553011527b7f6ebcc83d8f9c6128e43d0247bc0916b0fa140f6605e2e480834 WatchSource:0}: Error finding container 4553011527b7f6ebcc83d8f9c6128e43d0247bc0916b0fa140f6605e2e480834: Status 404 returned error can't find the container with id 4553011527b7f6ebcc83d8f9c6128e43d0247bc0916b0fa140f6605e2e480834 Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.747424 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dl9mv"] Nov 26 13:26:06 crc kubenswrapper[4695]: I1126 13:26:06.841948 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-k87lh"] Nov 26 13:26:06 crc kubenswrapper[4695]: W1126 13:26:06.864841 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45f264e1_d601_4586_81f1_1ddbb10c5bc1.slice/crio-52a4db40035f53525e12711e827d628ba12c967c0d68b63d5de446bc91bad273 WatchSource:0}: Error finding container 52a4db40035f53525e12711e827d628ba12c967c0d68b63d5de446bc91bad273: Status 404 returned error can't find the container with id 52a4db40035f53525e12711e827d628ba12c967c0d68b63d5de446bc91bad273 Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.131637 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld 
Nov 26 13:26:07 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:07 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:07 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.131698 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.172884 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.301886 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.302841 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.309657 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.309952 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.321646 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.389890 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.389960 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.491587 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.491639 4695 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.492110 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.520522 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.525814 4695 generic.go:334] "Generic (PLEG): container finished" podID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerID="02d318d4744b2263e64b81b9693d85e5fbd57c4a1a865ddd1e21733ad766ef77" exitCode=0 Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.525893 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k87lh" event={"ID":"45f264e1-d601-4586-81f1-1ddbb10c5bc1","Type":"ContainerDied","Data":"02d318d4744b2263e64b81b9693d85e5fbd57c4a1a865ddd1e21733ad766ef77"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.525923 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k87lh" event={"ID":"45f264e1-d601-4586-81f1-1ddbb10c5bc1","Type":"ContainerStarted","Data":"52a4db40035f53525e12711e827d628ba12c967c0d68b63d5de446bc91bad273"} Nov 26 13:26:07 crc 
kubenswrapper[4695]: I1126 13:26:07.528497 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.528602 4695 generic.go:334] "Generic (PLEG): container finished" podID="b9a9b369-2369-4fe3-9568-ada564d1c2a6" containerID="b85cac8689fd2f9e52b918b29d8980e90364b106fb738cf729f66c8eecf4f3fa" exitCode=0 Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.528743 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" event={"ID":"b9a9b369-2369-4fe3-9568-ada564d1c2a6","Type":"ContainerDied","Data":"b85cac8689fd2f9e52b918b29d8980e90364b106fb738cf729f66c8eecf4f3fa"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.531779 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" event={"ID":"bbdc37eb-6673-45f1-8d42-ac1e51e041a3","Type":"ContainerStarted","Data":"21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.531845 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" event={"ID":"bbdc37eb-6673-45f1-8d42-ac1e51e041a3","Type":"ContainerStarted","Data":"9f69d419568bb4a673898b8dc211ef3f82ee7b332bb47dbbb5233c67d5f652df"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.532474 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.533695 4695 generic.go:334] "Generic (PLEG): container finished" podID="1f6308de-c770-4097-807a-ea8d1fd17151" containerID="a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a" exitCode=0 Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.533741 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-84jk7" event={"ID":"1f6308de-c770-4097-807a-ea8d1fd17151","Type":"ContainerDied","Data":"a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.533760 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84jk7" event={"ID":"1f6308de-c770-4097-807a-ea8d1fd17151","Type":"ContainerStarted","Data":"230d3c9e861599fce88ef1bfb5ecd348773ced85eaa83bf3b129d97993c160ab"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.543362 4695 generic.go:334] "Generic (PLEG): container finished" podID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerID="a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676" exitCode=0 Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.543473 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqtp" event={"ID":"53142449-ccfa-4cee-a77e-c1a4f9178691","Type":"ContainerDied","Data":"a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.543506 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqtp" event={"ID":"53142449-ccfa-4cee-a77e-c1a4f9178691","Type":"ContainerStarted","Data":"4553011527b7f6ebcc83d8f9c6128e43d0247bc0916b0fa140f6605e2e480834"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.549494 4695 generic.go:334] "Generic (PLEG): container finished" podID="838cffa7-c983-4531-9b48-8397076df516" containerID="440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736" exitCode=0 Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.549769 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tvmf" 
event={"ID":"838cffa7-c983-4531-9b48-8397076df516","Type":"ContainerDied","Data":"440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.549830 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tvmf" event={"ID":"838cffa7-c983-4531-9b48-8397076df516","Type":"ContainerStarted","Data":"8888cda98eac5af0755bb3c925f008ca9296e58c8938d57aefbd3586e53942ab"} Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.569883 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sbmxm"] Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.571031 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.576322 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.588882 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" podStartSLOduration=130.588858585 podStartE2EDuration="2m10.588858585s" podCreationTimestamp="2025-11-26 13:23:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:07.57657246 +0000 UTC m=+151.212397542" watchObservedRunningTime="2025-11-26 13:26:07.588858585 +0000 UTC m=+151.224683667" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.591371 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbmxm"] Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.619015 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.677010 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.677050 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.682194 4695 patch_prober.go:28] interesting pod/downloads-7954f5f757-x97bq container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.682228 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-x97bq" podUID="6b0b4bb2-6319-4f1b-ba1c-80256970147d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.682406 4695 patch_prober.go:28] interesting pod/downloads-7954f5f757-x97bq container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.682473 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-x97bq" podUID="6b0b4bb2-6319-4f1b-ba1c-80256970147d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.694314 4695 patch_prober.go:28] interesting pod/apiserver-76f77b778f-jxwsj 
container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]log ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]etcd ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/generic-apiserver-start-informers ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/max-in-flight-filter ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 26 13:26:07 crc kubenswrapper[4695]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/project.openshift.io-projectcache ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/openshift.io-startinformers ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 26 13:26:07 crc kubenswrapper[4695]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 26 13:26:07 crc kubenswrapper[4695]: livez check failed Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.694386 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" podUID="f4397fbb-62ef-4e2f-9ace-6c76a6e49f85" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.695389 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-r6j9v\" (UniqueName: \"kubernetes.io/projected/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-kube-api-access-r6j9v\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.695428 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-catalog-content\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.695462 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-utilities\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.797235 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6j9v\" (UniqueName: \"kubernetes.io/projected/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-kube-api-access-r6j9v\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.797759 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-catalog-content\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.797824 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-utilities\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.798812 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-utilities\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.799726 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-catalog-content\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.803457 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.803503 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.806318 4695 patch_prober.go:28] interesting pod/console-f9d7485db-4fqcc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.806376 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4fqcc" podUID="d84d0827-d7fe-42eb-adbe-eda35247c26c" containerName="console" probeResult="failure" 
output="Get \"https://10.217.0.27:8443/health\": dial tcp 10.217.0.27:8443: connect: connection refused" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.821982 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6j9v\" (UniqueName: \"kubernetes.io/projected/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-kube-api-access-r6j9v\") pod \"redhat-marketplace-sbmxm\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.898603 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.942516 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.946743 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmjm"] Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.947914 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:07 crc kubenswrapper[4695]: I1126 13:26:07.962909 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmjm"] Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.001093 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7kdd\" (UniqueName: \"kubernetes.io/projected/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-kube-api-access-t7kdd\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.001127 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-utilities\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.001142 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-catalog-content\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.105974 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7kdd\" (UniqueName: \"kubernetes.io/projected/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-kube-api-access-t7kdd\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.106396 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-catalog-content\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.106414 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-utilities\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.107421 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-utilities\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.107509 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-catalog-content\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.119850 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbmxm"] Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.126405 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.132955 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7kdd\" (UniqueName: 
\"kubernetes.io/projected/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-kube-api-access-t7kdd\") pod \"redhat-marketplace-hdmjm\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.134202 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:08 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:08 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:08 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.134241 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.277776 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.516642 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmjm"] Nov 26 13:26:08 crc kubenswrapper[4695]: W1126 13:26:08.524844 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46e8211f_e429_46c3_9dbf_2dd2d6490dfd.slice/crio-13fd09a5a6c40b498a7cfb54eedab948541b3b880305eb71bfcf7e768a7bd98b WatchSource:0}: Error finding container 13fd09a5a6c40b498a7cfb54eedab948541b3b880305eb71bfcf7e768a7bd98b: Status 404 returned error can't find the container with id 13fd09a5a6c40b498a7cfb54eedab948541b3b880305eb71bfcf7e768a7bd98b Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.557340 4695 generic.go:334] "Generic (PLEG): container finished" podID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerID="de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d" exitCode=0 Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.557420 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbmxm" event={"ID":"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb","Type":"ContainerDied","Data":"de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d"} Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.557449 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbmxm" event={"ID":"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb","Type":"ContainerStarted","Data":"aaf44565da6a03d35ca2558e627a1144ed1d8a83504f237289ebd8298d73bcca"} Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.559536 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmjm" 
event={"ID":"46e8211f-e429-46c3-9dbf-2dd2d6490dfd","Type":"ContainerStarted","Data":"13fd09a5a6c40b498a7cfb54eedab948541b3b880305eb71bfcf7e768a7bd98b"} Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.561702 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4","Type":"ContainerStarted","Data":"2af394a278595c0b2765fa7aff84cb9b4274af1814ef008b2b67f2efc5d3b200"} Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.561740 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4","Type":"ContainerStarted","Data":"2c139c267b8de47988cd05fad3e4c61886a01634f7252d743af20c229baa04b7"} Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.561937 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.615860 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=1.615837447 podStartE2EDuration="1.615837447s" podCreationTimestamp="2025-11-26 13:26:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:08.607017673 +0000 UTC m=+152.242842795" watchObservedRunningTime="2025-11-26 13:26:08.615837447 +0000 UTC m=+152.251662549" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.751573 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9p6q"] Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.752926 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.754615 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.754876 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9p6q"] Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.762901 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4fqh" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.805063 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.817922 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-catalog-content\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.817995 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-utilities\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.818124 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df292\" (UniqueName: \"kubernetes.io/projected/4ede3477-0b5e-43ba-a074-244304777695-kube-api-access-df292\") pod \"redhat-operators-k9p6q\" 
(UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.922954 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fm4pr\" (UniqueName: \"kubernetes.io/projected/b9a9b369-2369-4fe3-9568-ada564d1c2a6-kube-api-access-fm4pr\") pod \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.923124 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a9b369-2369-4fe3-9568-ada564d1c2a6-secret-volume\") pod \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.923206 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a9b369-2369-4fe3-9568-ada564d1c2a6-config-volume\") pod \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\" (UID: \"b9a9b369-2369-4fe3-9568-ada564d1c2a6\") " Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.923462 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df292\" (UniqueName: \"kubernetes.io/projected/4ede3477-0b5e-43ba-a074-244304777695-kube-api-access-df292\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.923510 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-catalog-content\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc 
kubenswrapper[4695]: I1126 13:26:08.923565 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-utilities\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.923584 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9a9b369-2369-4fe3-9568-ada564d1c2a6-config-volume" (OuterVolumeSpecName: "config-volume") pod "b9a9b369-2369-4fe3-9568-ada564d1c2a6" (UID: "b9a9b369-2369-4fe3-9568-ada564d1c2a6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.924100 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-utilities\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.924550 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-catalog-content\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.931081 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9a9b369-2369-4fe3-9568-ada564d1c2a6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b9a9b369-2369-4fe3-9568-ada564d1c2a6" (UID: "b9a9b369-2369-4fe3-9568-ada564d1c2a6"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.931359 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9a9b369-2369-4fe3-9568-ada564d1c2a6-kube-api-access-fm4pr" (OuterVolumeSpecName: "kube-api-access-fm4pr") pod "b9a9b369-2369-4fe3-9568-ada564d1c2a6" (UID: "b9a9b369-2369-4fe3-9568-ada564d1c2a6"). InnerVolumeSpecName "kube-api-access-fm4pr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:26:08 crc kubenswrapper[4695]: I1126 13:26:08.941737 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df292\" (UniqueName: \"kubernetes.io/projected/4ede3477-0b5e-43ba-a074-244304777695-kube-api-access-df292\") pod \"redhat-operators-k9p6q\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.025278 4695 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b9a9b369-2369-4fe3-9568-ada564d1c2a6-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.025314 4695 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9a9b369-2369-4fe3-9568-ada564d1c2a6-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.025324 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fm4pr\" (UniqueName: \"kubernetes.io/projected/b9a9b369-2369-4fe3-9568-ada564d1c2a6-kube-api-access-fm4pr\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.080683 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.137215 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:09 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:09 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:09 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.137273 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.142905 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-82p2l"] Nov 26 13:26:09 crc kubenswrapper[4695]: E1126 13:26:09.143149 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9a9b369-2369-4fe3-9568-ada564d1c2a6" containerName="collect-profiles" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.143162 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9a9b369-2369-4fe3-9568-ada564d1c2a6" containerName="collect-profiles" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.143260 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9a9b369-2369-4fe3-9568-ada564d1c2a6" containerName="collect-profiles" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.144070 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.154025 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82p2l"] Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.236109 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-catalog-content\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.236188 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-utilities\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.236252 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrdlk\" (UniqueName: \"kubernetes.io/projected/d7b38b12-b4e5-43d3-b7b7-b57169d06241-kube-api-access-wrdlk\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.337854 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-catalog-content\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.337939 4695 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-utilities\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.337997 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrdlk\" (UniqueName: \"kubernetes.io/projected/d7b38b12-b4e5-43d3-b7b7-b57169d06241-kube-api-access-wrdlk\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.338457 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-catalog-content\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.338836 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-utilities\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.360230 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrdlk\" (UniqueName: \"kubernetes.io/projected/d7b38b12-b4e5-43d3-b7b7-b57169d06241-kube-api-access-wrdlk\") pod \"redhat-operators-82p2l\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.489269 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.530978 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9p6q"] Nov 26 13:26:09 crc kubenswrapper[4695]: W1126 13:26:09.544439 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ede3477_0b5e_43ba_a074_244304777695.slice/crio-c94d82a937456a1b0f51ac6e8b110bf68211778a118b3eb116150262d59c27da WatchSource:0}: Error finding container c94d82a937456a1b0f51ac6e8b110bf68211778a118b3eb116150262d59c27da: Status 404 returned error can't find the container with id c94d82a937456a1b0f51ac6e8b110bf68211778a118b3eb116150262d59c27da Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.590274 4695 generic.go:334] "Generic (PLEG): container finished" podID="5aba71ef-5d0a-43b4-80b2-0a5c17408ed4" containerID="2af394a278595c0b2765fa7aff84cb9b4274af1814ef008b2b67f2efc5d3b200" exitCode=0 Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.590319 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4","Type":"ContainerDied","Data":"2af394a278595c0b2765fa7aff84cb9b4274af1814ef008b2b67f2efc5d3b200"} Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.592286 4695 generic.go:334] "Generic (PLEG): container finished" podID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerID="05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97" exitCode=0 Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.592336 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmjm" event={"ID":"46e8211f-e429-46c3-9dbf-2dd2d6490dfd","Type":"ContainerDied","Data":"05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97"} Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 
13:26:09.599377 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9p6q" event={"ID":"4ede3477-0b5e-43ba-a074-244304777695","Type":"ContainerStarted","Data":"c94d82a937456a1b0f51ac6e8b110bf68211778a118b3eb116150262d59c27da"} Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.613854 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.613871 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8" event={"ID":"b9a9b369-2369-4fe3-9568-ada564d1c2a6","Type":"ContainerDied","Data":"fec47fb54d77e787792bdf5cae0f380336bc52d99e7f24ba337ea66adfccc5bc"} Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.614804 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fec47fb54d77e787792bdf5cae0f380336bc52d99e7f24ba337ea66adfccc5bc" Nov 26 13:26:09 crc kubenswrapper[4695]: I1126 13:26:09.839914 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-82p2l"] Nov 26 13:26:09 crc kubenswrapper[4695]: W1126 13:26:09.844403 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7b38b12_b4e5_43d3_b7b7_b57169d06241.slice/crio-c27b55fcbeb13488be21fb0681f07eca10c20c4a65467a3389c483e9d2bde2e0 WatchSource:0}: Error finding container c27b55fcbeb13488be21fb0681f07eca10c20c4a65467a3389c483e9d2bde2e0: Status 404 returned error can't find the container with id c27b55fcbeb13488be21fb0681f07eca10c20c4a65467a3389c483e9d2bde2e0 Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.130154 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:10 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:10 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:10 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.130228 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.624226 4695 generic.go:334] "Generic (PLEG): container finished" podID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerID="72b0ca282ee59346479756338c7170fefd2e549a552b7e4a40fd461cc705319f" exitCode=0 Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.624272 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82p2l" event={"ID":"d7b38b12-b4e5-43d3-b7b7-b57169d06241","Type":"ContainerDied","Data":"72b0ca282ee59346479756338c7170fefd2e549a552b7e4a40fd461cc705319f"} Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.624328 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82p2l" event={"ID":"d7b38b12-b4e5-43d3-b7b7-b57169d06241","Type":"ContainerStarted","Data":"c27b55fcbeb13488be21fb0681f07eca10c20c4a65467a3389c483e9d2bde2e0"} Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.660875 4695 generic.go:334] "Generic (PLEG): container finished" podID="4ede3477-0b5e-43ba-a074-244304777695" containerID="071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80" exitCode=0 Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.661130 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9p6q" 
event={"ID":"4ede3477-0b5e-43ba-a074-244304777695","Type":"ContainerDied","Data":"071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80"} Nov 26 13:26:10 crc kubenswrapper[4695]: I1126 13:26:10.969159 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.061382 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kube-api-access\") pod \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.061469 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kubelet-dir\") pod \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\" (UID: \"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4\") " Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.061620 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5aba71ef-5d0a-43b4-80b2-0a5c17408ed4" (UID: "5aba71ef-5d0a-43b4-80b2-0a5c17408ed4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.061806 4695 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.066372 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5aba71ef-5d0a-43b4-80b2-0a5c17408ed4" (UID: "5aba71ef-5d0a-43b4-80b2-0a5c17408ed4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.128520 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:11 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:11 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:11 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.128659 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.163124 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aba71ef-5d0a-43b4-80b2-0a5c17408ed4-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.669239 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.669163 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5aba71ef-5d0a-43b4-80b2-0a5c17408ed4","Type":"ContainerDied","Data":"2c139c267b8de47988cd05fad3e4c61886a01634f7252d743af20c229baa04b7"} Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.669786 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c139c267b8de47988cd05fad3e4c61886a01634f7252d743af20c229baa04b7" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.975303 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 26 13:26:11 crc kubenswrapper[4695]: E1126 13:26:11.975763 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aba71ef-5d0a-43b4-80b2-0a5c17408ed4" containerName="pruner" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.975780 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aba71ef-5d0a-43b4-80b2-0a5c17408ed4" containerName="pruner" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.975904 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aba71ef-5d0a-43b4-80b2-0a5c17408ed4" containerName="pruner" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.976708 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.978703 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 26 13:26:11 crc kubenswrapper[4695]: I1126 13:26:11.979274 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:11.982356 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.075732 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df1c7c05-23e1-4092-b6dc-9832a392eca6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.075793 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df1c7c05-23e1-4092-b6dc-9832a392eca6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.128796 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:12 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:12 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:12 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 
13:26:12.128876 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.177205 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df1c7c05-23e1-4092-b6dc-9832a392eca6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.177322 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df1c7c05-23e1-4092-b6dc-9832a392eca6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.177613 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df1c7c05-23e1-4092-b6dc-9832a392eca6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.211822 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df1c7c05-23e1-4092-b6dc-9832a392eca6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.329492 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.681178 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:26:12 crc kubenswrapper[4695]: I1126 13:26:12.686924 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-jxwsj" Nov 26 13:26:13 crc kubenswrapper[4695]: I1126 13:26:13.128778 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:13 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:13 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:13 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:13 crc kubenswrapper[4695]: I1126 13:26:13.128848 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:13 crc kubenswrapper[4695]: I1126 13:26:13.595918 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-4j967" Nov 26 13:26:14 crc kubenswrapper[4695]: I1126 13:26:14.128407 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:14 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:14 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:14 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:14 crc 
kubenswrapper[4695]: I1126 13:26:14.128648 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:15 crc kubenswrapper[4695]: I1126 13:26:15.127822 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:15 crc kubenswrapper[4695]: [-]has-synced failed: reason withheld Nov 26 13:26:15 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:15 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:15 crc kubenswrapper[4695]: I1126 13:26:15.127893 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:16 crc kubenswrapper[4695]: I1126 13:26:16.128788 4695 patch_prober.go:28] interesting pod/router-default-5444994796-t8w9k container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 26 13:26:16 crc kubenswrapper[4695]: [+]has-synced ok Nov 26 13:26:16 crc kubenswrapper[4695]: [+]process-running ok Nov 26 13:26:16 crc kubenswrapper[4695]: healthz check failed Nov 26 13:26:16 crc kubenswrapper[4695]: I1126 13:26:16.128873 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-t8w9k" podUID="0df7974d-d44f-4c17-b4a4-afcef9078807" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 26 13:26:17 crc kubenswrapper[4695]: I1126 13:26:17.127703 4695 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:26:17 crc kubenswrapper[4695]: I1126 13:26:17.133259 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-t8w9k" Nov 26 13:26:17 crc kubenswrapper[4695]: I1126 13:26:17.686253 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-x97bq" Nov 26 13:26:17 crc kubenswrapper[4695]: I1126 13:26:17.808935 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:26:17 crc kubenswrapper[4695]: I1126 13:26:17.814257 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:26:18 crc kubenswrapper[4695]: I1126 13:26:18.602000 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:26:18 crc kubenswrapper[4695]: I1126 13:26:18.615958 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/755825f0-d565-4a02-8a54-8f9be77991d6-metrics-certs\") pod \"network-metrics-daemon-l9n9h\" (UID: \"755825f0-d565-4a02-8a54-8f9be77991d6\") " pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:26:18 crc kubenswrapper[4695]: I1126 13:26:18.910605 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-l9n9h" Nov 26 13:26:26 crc kubenswrapper[4695]: I1126 13:26:26.453550 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:26:27 crc kubenswrapper[4695]: E1126 13:26:27.392510 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 26 13:26:27 crc kubenswrapper[4695]: E1126 13:26:27.392780 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6j9v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil
,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-sbmxm_openshift-marketplace(7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:26:27 crc kubenswrapper[4695]: E1126 13:26:27.393886 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-sbmxm" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" Nov 26 13:26:27 crc kubenswrapper[4695]: E1126 13:26:27.483677 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 26 13:26:27 crc kubenswrapper[4695]: E1126 13:26:27.483827 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr95c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-84jk7_openshift-marketplace(1f6308de-c770-4097-807a-ea8d1fd17151): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:26:27 crc kubenswrapper[4695]: E1126 13:26:27.486793 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-84jk7" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" Nov 26 13:26:28 crc 
kubenswrapper[4695]: E1126 13:26:28.465106 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-sbmxm" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" Nov 26 13:26:28 crc kubenswrapper[4695]: E1126 13:26:28.465207 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-84jk7" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" Nov 26 13:26:28 crc kubenswrapper[4695]: E1126 13:26:28.527029 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 26 13:26:28 crc kubenswrapper[4695]: E1126 13:26:28.527365 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2d5ds,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-k87lh_openshift-marketplace(45f264e1-d601-4586-81f1-1ddbb10c5bc1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:26:28 crc kubenswrapper[4695]: E1126 13:26:28.529273 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-k87lh" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" Nov 26 13:26:28 crc 
kubenswrapper[4695]: E1126 13:26:28.602571 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 26 13:26:28 crc kubenswrapper[4695]: E1126 13:26:28.602937 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gxdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-9tvmf_openshift-marketplace(838cffa7-c983-4531-9b48-8397076df516): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 26 13:26:28 crc kubenswrapper[4695]: E1126 13:26:28.604332 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-9tvmf" podUID="838cffa7-c983-4531-9b48-8397076df516" Nov 26 13:26:28 crc kubenswrapper[4695]: I1126 13:26:28.935660 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-l9n9h"] Nov 26 13:26:29 crc kubenswrapper[4695]: I1126 13:26:29.018655 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 26 13:26:31 crc kubenswrapper[4695]: E1126 13:26:31.781619 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-9tvmf" podUID="838cffa7-c983-4531-9b48-8397076df516" Nov 26 13:26:31 crc kubenswrapper[4695]: E1126 13:26:31.781901 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-k87lh" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" Nov 26 13:26:31 crc kubenswrapper[4695]: W1126 13:26:31.798016 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-poddf1c7c05_23e1_4092_b6dc_9832a392eca6.slice/crio-ec7bc7ebec1f529af7ac928d30ae50c5f7d5ede6b8142b0f173ba9ca4b0cd07c WatchSource:0}: Error finding container ec7bc7ebec1f529af7ac928d30ae50c5f7d5ede6b8142b0f173ba9ca4b0cd07c: Status 404 returned error can't find the container with id ec7bc7ebec1f529af7ac928d30ae50c5f7d5ede6b8142b0f173ba9ca4b0cd07c Nov 26 13:26:31 crc kubenswrapper[4695]: I1126 13:26:31.817996 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"df1c7c05-23e1-4092-b6dc-9832a392eca6","Type":"ContainerStarted","Data":"ec7bc7ebec1f529af7ac928d30ae50c5f7d5ede6b8142b0f173ba9ca4b0cd07c"} Nov 26 13:26:31 crc kubenswrapper[4695]: I1126 13:26:31.819467 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" event={"ID":"755825f0-d565-4a02-8a54-8f9be77991d6","Type":"ContainerStarted","Data":"60d36666f0667a8e01c02eafd14694a77aaaba403b930fbb3f0f9f99cc3b7619"} Nov 26 13:26:32 crc kubenswrapper[4695]: I1126 13:26:32.833648 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqtp" event={"ID":"53142449-ccfa-4cee-a77e-c1a4f9178691","Type":"ContainerStarted","Data":"78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b"} Nov 26 13:26:33 crc kubenswrapper[4695]: I1126 13:26:33.846496 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" event={"ID":"755825f0-d565-4a02-8a54-8f9be77991d6","Type":"ContainerStarted","Data":"f454cc7c847b2b374336b1379a49d821bc0185283784e7072c32fb1a2714fc01"} Nov 26 13:26:33 crc kubenswrapper[4695]: I1126 13:26:33.850173 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"df1c7c05-23e1-4092-b6dc-9832a392eca6","Type":"ContainerStarted","Data":"ac5db8bfdc1462c52269bad18af3a3cae7587773a9f3277048c58ceb606be256"} Nov 26 
13:26:33 crc kubenswrapper[4695]: I1126 13:26:33.856466 4695 generic.go:334] "Generic (PLEG): container finished" podID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerID="78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b" exitCode=0 Nov 26 13:26:33 crc kubenswrapper[4695]: I1126 13:26:33.856526 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqtp" event={"ID":"53142449-ccfa-4cee-a77e-c1a4f9178691","Type":"ContainerDied","Data":"78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b"} Nov 26 13:26:33 crc kubenswrapper[4695]: I1126 13:26:33.874712 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=22.874686892 podStartE2EDuration="22.874686892s" podCreationTimestamp="2025-11-26 13:26:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:33.869876447 +0000 UTC m=+177.505701569" watchObservedRunningTime="2025-11-26 13:26:33.874686892 +0000 UTC m=+177.510512014" Nov 26 13:26:34 crc kubenswrapper[4695]: I1126 13:26:34.865260 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-l9n9h" event={"ID":"755825f0-d565-4a02-8a54-8f9be77991d6","Type":"ContainerStarted","Data":"6a97075d81b79b1751b0ea8ea223571e551a3267e231cbc7ad7b3a66c0764e0f"} Nov 26 13:26:34 crc kubenswrapper[4695]: I1126 13:26:34.885501 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-l9n9h" podStartSLOduration=158.885483393 podStartE2EDuration="2m38.885483393s" podCreationTimestamp="2025-11-26 13:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:26:34.881733413 +0000 UTC m=+178.517558515" 
watchObservedRunningTime="2025-11-26 13:26:34.885483393 +0000 UTC m=+178.521308485" Nov 26 13:26:36 crc kubenswrapper[4695]: I1126 13:26:36.397092 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:26:36 crc kubenswrapper[4695]: I1126 13:26:36.398151 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:26:38 crc kubenswrapper[4695]: I1126 13:26:38.507430 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-qckcf" Nov 26 13:26:39 crc kubenswrapper[4695]: I1126 13:26:39.916621 4695 generic.go:334] "Generic (PLEG): container finished" podID="df1c7c05-23e1-4092-b6dc-9832a392eca6" containerID="ac5db8bfdc1462c52269bad18af3a3cae7587773a9f3277048c58ceb606be256" exitCode=0 Nov 26 13:26:39 crc kubenswrapper[4695]: I1126 13:26:39.917017 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"df1c7c05-23e1-4092-b6dc-9832a392eca6","Type":"ContainerDied","Data":"ac5db8bfdc1462c52269bad18af3a3cae7587773a9f3277048c58ceb606be256"} Nov 26 13:26:39 crc kubenswrapper[4695]: I1126 13:26:39.922086 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82p2l" event={"ID":"d7b38b12-b4e5-43d3-b7b7-b57169d06241","Type":"ContainerStarted","Data":"0520d5ca2f03a53f50ae97615350008eafff95bce3cf8b22af3734bc5c91a2fc"} Nov 26 13:26:39 crc kubenswrapper[4695]: I1126 
13:26:39.929291 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9p6q" event={"ID":"4ede3477-0b5e-43ba-a074-244304777695","Type":"ContainerStarted","Data":"7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf"} Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.938272 4695 generic.go:334] "Generic (PLEG): container finished" podID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerID="0520d5ca2f03a53f50ae97615350008eafff95bce3cf8b22af3734bc5c91a2fc" exitCode=0 Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.938401 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82p2l" event={"ID":"d7b38b12-b4e5-43d3-b7b7-b57169d06241","Type":"ContainerDied","Data":"0520d5ca2f03a53f50ae97615350008eafff95bce3cf8b22af3734bc5c91a2fc"} Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.941951 4695 generic.go:334] "Generic (PLEG): container finished" podID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerID="b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441" exitCode=0 Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.942003 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmjm" event={"ID":"46e8211f-e429-46c3-9dbf-2dd2d6490dfd","Type":"ContainerDied","Data":"b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441"} Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.945713 4695 generic.go:334] "Generic (PLEG): container finished" podID="4ede3477-0b5e-43ba-a074-244304777695" containerID="7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf" exitCode=0 Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.945784 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9p6q" event={"ID":"4ede3477-0b5e-43ba-a074-244304777695","Type":"ContainerDied","Data":"7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf"} 
Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.951429 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqtp" event={"ID":"53142449-ccfa-4cee-a77e-c1a4f9178691","Type":"ContainerStarted","Data":"d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25"} Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.960833 4695 generic.go:334] "Generic (PLEG): container finished" podID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerID="a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f" exitCode=0 Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.960906 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbmxm" event={"ID":"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb","Type":"ContainerDied","Data":"a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f"} Nov 26 13:26:40 crc kubenswrapper[4695]: I1126 13:26:40.980661 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lkqtp" podStartSLOduration=3.288712509 podStartE2EDuration="35.980595343s" podCreationTimestamp="2025-11-26 13:26:05 +0000 UTC" firstStartedPulling="2025-11-26 13:26:07.549981685 +0000 UTC m=+151.185806767" lastFinishedPulling="2025-11-26 13:26:40.241864519 +0000 UTC m=+183.877689601" observedRunningTime="2025-11-26 13:26:40.978663331 +0000 UTC m=+184.614488433" watchObservedRunningTime="2025-11-26 13:26:40.980595343 +0000 UTC m=+184.616420425" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.317189 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.424974 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df1c7c05-23e1-4092-b6dc-9832a392eca6-kubelet-dir\") pod \"df1c7c05-23e1-4092-b6dc-9832a392eca6\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.425041 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/df1c7c05-23e1-4092-b6dc-9832a392eca6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "df1c7c05-23e1-4092-b6dc-9832a392eca6" (UID: "df1c7c05-23e1-4092-b6dc-9832a392eca6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.425104 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df1c7c05-23e1-4092-b6dc-9832a392eca6-kube-api-access\") pod \"df1c7c05-23e1-4092-b6dc-9832a392eca6\" (UID: \"df1c7c05-23e1-4092-b6dc-9832a392eca6\") " Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.425446 4695 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/df1c7c05-23e1-4092-b6dc-9832a392eca6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.432079 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df1c7c05-23e1-4092-b6dc-9832a392eca6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "df1c7c05-23e1-4092-b6dc-9832a392eca6" (UID: "df1c7c05-23e1-4092-b6dc-9832a392eca6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.529129 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/df1c7c05-23e1-4092-b6dc-9832a392eca6-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.968621 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"df1c7c05-23e1-4092-b6dc-9832a392eca6","Type":"ContainerDied","Data":"ec7bc7ebec1f529af7ac928d30ae50c5f7d5ede6b8142b0f173ba9ca4b0cd07c"} Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.968844 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7bc7ebec1f529af7ac928d30ae50c5f7d5ede6b8142b0f173ba9ca4b0cd07c" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.968674 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.972552 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbmxm" event={"ID":"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb","Type":"ContainerStarted","Data":"b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6"} Nov 26 13:26:41 crc kubenswrapper[4695]: I1126 13:26:41.974902 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82p2l" event={"ID":"d7b38b12-b4e5-43d3-b7b7-b57169d06241","Type":"ContainerStarted","Data":"2ba1ed52dc24ff22428ddf7dc9bd1b404a3423408a1dc4296dee2bd8d1074a9c"} Nov 26 13:26:42 crc kubenswrapper[4695]: I1126 13:26:42.018406 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sbmxm" podStartSLOduration=1.846934437 podStartE2EDuration="35.018383442s" podCreationTimestamp="2025-11-26 
13:26:07 +0000 UTC" firstStartedPulling="2025-11-26 13:26:08.562014916 +0000 UTC m=+152.197839998" lastFinishedPulling="2025-11-26 13:26:41.733463921 +0000 UTC m=+185.369289003" observedRunningTime="2025-11-26 13:26:41.9962355 +0000 UTC m=+185.632060582" watchObservedRunningTime="2025-11-26 13:26:42.018383442 +0000 UTC m=+185.654208524" Nov 26 13:26:42 crc kubenswrapper[4695]: I1126 13:26:42.018532 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-82p2l" podStartSLOduration=2.130245856 podStartE2EDuration="33.018527067s" podCreationTimestamp="2025-11-26 13:26:09 +0000 UTC" firstStartedPulling="2025-11-26 13:26:10.625576768 +0000 UTC m=+154.261401850" lastFinishedPulling="2025-11-26 13:26:41.513857979 +0000 UTC m=+185.149683061" observedRunningTime="2025-11-26 13:26:42.015845361 +0000 UTC m=+185.651670453" watchObservedRunningTime="2025-11-26 13:26:42.018527067 +0000 UTC m=+185.654352149" Nov 26 13:26:42 crc kubenswrapper[4695]: I1126 13:26:42.982585 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmjm" event={"ID":"46e8211f-e429-46c3-9dbf-2dd2d6490dfd","Type":"ContainerStarted","Data":"9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786"} Nov 26 13:26:42 crc kubenswrapper[4695]: I1126 13:26:42.985586 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9p6q" event={"ID":"4ede3477-0b5e-43ba-a074-244304777695","Type":"ContainerStarted","Data":"2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3"} Nov 26 13:26:43 crc kubenswrapper[4695]: I1126 13:26:43.028907 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9p6q" podStartSLOduration=3.832097663 podStartE2EDuration="35.028888034s" podCreationTimestamp="2025-11-26 13:26:08 +0000 UTC" firstStartedPulling="2025-11-26 13:26:10.670054728 +0000 UTC m=+154.305879810" 
lastFinishedPulling="2025-11-26 13:26:41.866845089 +0000 UTC m=+185.502670181" observedRunningTime="2025-11-26 13:26:43.025888737 +0000 UTC m=+186.661713819" watchObservedRunningTime="2025-11-26 13:26:43.028888034 +0000 UTC m=+186.664713116" Nov 26 13:26:43 crc kubenswrapper[4695]: I1126 13:26:43.030748 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hdmjm" podStartSLOduration=3.6591036260000003 podStartE2EDuration="36.030738583s" podCreationTimestamp="2025-11-26 13:26:07 +0000 UTC" firstStartedPulling="2025-11-26 13:26:09.604664972 +0000 UTC m=+153.240490054" lastFinishedPulling="2025-11-26 13:26:41.976299929 +0000 UTC m=+185.612125011" observedRunningTime="2025-11-26 13:26:43.006788563 +0000 UTC m=+186.642613635" watchObservedRunningTime="2025-11-26 13:26:43.030738583 +0000 UTC m=+186.666563665" Nov 26 13:26:44 crc kubenswrapper[4695]: I1126 13:26:44.004050 4695 generic.go:334] "Generic (PLEG): container finished" podID="1f6308de-c770-4097-807a-ea8d1fd17151" containerID="f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9" exitCode=0 Nov 26 13:26:44 crc kubenswrapper[4695]: I1126 13:26:44.004263 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84jk7" event={"ID":"1f6308de-c770-4097-807a-ea8d1fd17151","Type":"ContainerDied","Data":"f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9"} Nov 26 13:26:45 crc kubenswrapper[4695]: I1126 13:26:45.011664 4695 generic.go:334] "Generic (PLEG): container finished" podID="838cffa7-c983-4531-9b48-8397076df516" containerID="7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df" exitCode=0 Nov 26 13:26:45 crc kubenswrapper[4695]: I1126 13:26:45.011702 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tvmf" 
event={"ID":"838cffa7-c983-4531-9b48-8397076df516","Type":"ContainerDied","Data":"7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df"} Nov 26 13:26:45 crc kubenswrapper[4695]: I1126 13:26:45.407028 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 26 13:26:46 crc kubenswrapper[4695]: I1126 13:26:46.017672 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84jk7" event={"ID":"1f6308de-c770-4097-807a-ea8d1fd17151","Type":"ContainerStarted","Data":"35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9"} Nov 26 13:26:46 crc kubenswrapper[4695]: I1126 13:26:46.019602 4695 generic.go:334] "Generic (PLEG): container finished" podID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerID="540c50bf86424fa9ffcbf3f19a98453a50759c5a6e7df55aa8ef4a3e6e0245f7" exitCode=0 Nov 26 13:26:46 crc kubenswrapper[4695]: I1126 13:26:46.019650 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k87lh" event={"ID":"45f264e1-d601-4586-81f1-1ddbb10c5bc1","Type":"ContainerDied","Data":"540c50bf86424fa9ffcbf3f19a98453a50759c5a6e7df55aa8ef4a3e6e0245f7"} Nov 26 13:26:46 crc kubenswrapper[4695]: I1126 13:26:46.037294 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84jk7" podStartSLOduration=3.540895167 podStartE2EDuration="41.037277255s" podCreationTimestamp="2025-11-26 13:26:05 +0000 UTC" firstStartedPulling="2025-11-26 13:26:07.536720699 +0000 UTC m=+151.172545781" lastFinishedPulling="2025-11-26 13:26:45.033102797 +0000 UTC m=+188.668927869" observedRunningTime="2025-11-26 13:26:46.034695232 +0000 UTC m=+189.670520324" watchObservedRunningTime="2025-11-26 13:26:46.037277255 +0000 UTC m=+189.673102337" Nov 26 13:26:46 crc kubenswrapper[4695]: I1126 13:26:46.272263 4695 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:46 crc kubenswrapper[4695]: I1126 13:26:46.272604 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:46 crc kubenswrapper[4695]: I1126 13:26:46.716542 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:47 crc kubenswrapper[4695]: I1126 13:26:47.075142 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:47 crc kubenswrapper[4695]: I1126 13:26:47.900109 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:47 crc kubenswrapper[4695]: I1126 13:26:47.900754 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:47 crc kubenswrapper[4695]: I1126 13:26:47.952155 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.040358 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tvmf" event={"ID":"838cffa7-c983-4531-9b48-8397076df516","Type":"ContainerStarted","Data":"143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40"} Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.043378 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k87lh" event={"ID":"45f264e1-d601-4586-81f1-1ddbb10c5bc1","Type":"ContainerStarted","Data":"ec908bb7d965e627b29ca3eab618e66f9d9bbef2c271b341743681f221432e6c"} Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.058272 4695 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-9tvmf" podStartSLOduration=3.717392331 podStartE2EDuration="43.058254878s" podCreationTimestamp="2025-11-26 13:26:05 +0000 UTC" firstStartedPulling="2025-11-26 13:26:07.551509804 +0000 UTC m=+151.187334886" lastFinishedPulling="2025-11-26 13:26:46.892372351 +0000 UTC m=+190.528197433" observedRunningTime="2025-11-26 13:26:48.056711589 +0000 UTC m=+191.692536671" watchObservedRunningTime="2025-11-26 13:26:48.058254878 +0000 UTC m=+191.694079960" Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.078983 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-k87lh" podStartSLOduration=1.8972509610000001 podStartE2EDuration="42.078966764s" podCreationTimestamp="2025-11-26 13:26:06 +0000 UTC" firstStartedPulling="2025-11-26 13:26:07.528207245 +0000 UTC m=+151.164032327" lastFinishedPulling="2025-11-26 13:26:47.709923048 +0000 UTC m=+191.345748130" observedRunningTime="2025-11-26 13:26:48.07757028 +0000 UTC m=+191.713395392" watchObservedRunningTime="2025-11-26 13:26:48.078966764 +0000 UTC m=+191.714791846" Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.081766 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.278829 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.278902 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.326103 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:48 crc kubenswrapper[4695]: I1126 13:26:48.350492 4695 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-marketplace/community-operators-lkqtp"] Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.049761 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lkqtp" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="registry-server" containerID="cri-o://d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25" gracePeriod=2 Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.082000 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.082451 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.131627 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.139430 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.490649 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.490959 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.532286 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.553615 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.641615 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-catalog-content\") pod \"53142449-ccfa-4cee-a77e-c1a4f9178691\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.641690 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlhnf\" (UniqueName: \"kubernetes.io/projected/53142449-ccfa-4cee-a77e-c1a4f9178691-kube-api-access-hlhnf\") pod \"53142449-ccfa-4cee-a77e-c1a4f9178691\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.641757 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-utilities\") pod \"53142449-ccfa-4cee-a77e-c1a4f9178691\" (UID: \"53142449-ccfa-4cee-a77e-c1a4f9178691\") " Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.642682 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-utilities" (OuterVolumeSpecName: "utilities") pod "53142449-ccfa-4cee-a77e-c1a4f9178691" (UID: "53142449-ccfa-4cee-a77e-c1a4f9178691"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.648690 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53142449-ccfa-4cee-a77e-c1a4f9178691-kube-api-access-hlhnf" (OuterVolumeSpecName: "kube-api-access-hlhnf") pod "53142449-ccfa-4cee-a77e-c1a4f9178691" (UID: "53142449-ccfa-4cee-a77e-c1a4f9178691"). InnerVolumeSpecName "kube-api-access-hlhnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.693649 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53142449-ccfa-4cee-a77e-c1a4f9178691" (UID: "53142449-ccfa-4cee-a77e-c1a4f9178691"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.743456 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.743494 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlhnf\" (UniqueName: \"kubernetes.io/projected/53142449-ccfa-4cee-a77e-c1a4f9178691-kube-api-access-hlhnf\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:49 crc kubenswrapper[4695]: I1126 13:26:49.743508 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53142449-ccfa-4cee-a77e-c1a4f9178691-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.062393 4695 generic.go:334] "Generic (PLEG): container finished" podID="53142449-ccfa-4cee-a77e-c1a4f9178691" 
containerID="d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25" exitCode=0 Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.062597 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lkqtp" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.062602 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqtp" event={"ID":"53142449-ccfa-4cee-a77e-c1a4f9178691","Type":"ContainerDied","Data":"d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25"} Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.062661 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqtp" event={"ID":"53142449-ccfa-4cee-a77e-c1a4f9178691","Type":"ContainerDied","Data":"4553011527b7f6ebcc83d8f9c6128e43d0247bc0916b0fa140f6605e2e480834"} Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.062680 4695 scope.go:117] "RemoveContainer" containerID="d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.096022 4695 scope.go:117] "RemoveContainer" containerID="78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.120147 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkqtp"] Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.127073 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lkqtp"] Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.128924 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.129445 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.137561 4695 scope.go:117] "RemoveContainer" containerID="a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.169126 4695 scope.go:117] "RemoveContainer" containerID="d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25" Nov 26 13:26:50 crc kubenswrapper[4695]: E1126 13:26:50.170751 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25\": container with ID starting with d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25 not found: ID does not exist" containerID="d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.170795 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25"} err="failed to get container status \"d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25\": rpc error: code = NotFound desc = could not find container \"d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25\": container with ID starting with d4a5849e7aa514a39c883f4736b8508698623521937e5e1495aa96b05140da25 not found: ID does not exist" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.170862 4695 scope.go:117] "RemoveContainer" containerID="78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b" Nov 26 13:26:50 crc kubenswrapper[4695]: E1126 13:26:50.171319 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b\": container with ID starting with 
78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b not found: ID does not exist" containerID="78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.171425 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b"} err="failed to get container status \"78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b\": rpc error: code = NotFound desc = could not find container \"78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b\": container with ID starting with 78b3a0e0e59bccee8f8a4e8c8b96db4b3fa500af3990fe30fdc75ca1b872f13b not found: ID does not exist" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.171467 4695 scope.go:117] "RemoveContainer" containerID="a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676" Nov 26 13:26:50 crc kubenswrapper[4695]: E1126 13:26:50.172590 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676\": container with ID starting with a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676 not found: ID does not exist" containerID="a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.172631 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676"} err="failed to get container status \"a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676\": rpc error: code = NotFound desc = could not find container \"a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676\": container with ID starting with a736528860381eaab2ab7db6f614ba4e297b7d563eb4621a69ec4fcaf63f0676 not found: ID does not 
exist" Nov 26 13:26:50 crc kubenswrapper[4695]: I1126 13:26:50.745993 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmjm"] Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.069088 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hdmjm" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="registry-server" containerID="cri-o://9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786" gracePeriod=2 Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.172208 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" path="/var/lib/kubelet/pods/53142449-ccfa-4cee-a77e-c1a4f9178691/volumes" Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.567792 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.676961 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-catalog-content\") pod \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.677071 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-utilities\") pod \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.677099 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7kdd\" (UniqueName: \"kubernetes.io/projected/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-kube-api-access-t7kdd\") pod 
\"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\" (UID: \"46e8211f-e429-46c3-9dbf-2dd2d6490dfd\") " Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.678935 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-utilities" (OuterVolumeSpecName: "utilities") pod "46e8211f-e429-46c3-9dbf-2dd2d6490dfd" (UID: "46e8211f-e429-46c3-9dbf-2dd2d6490dfd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.685945 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-kube-api-access-t7kdd" (OuterVolumeSpecName: "kube-api-access-t7kdd") pod "46e8211f-e429-46c3-9dbf-2dd2d6490dfd" (UID: "46e8211f-e429-46c3-9dbf-2dd2d6490dfd"). InnerVolumeSpecName "kube-api-access-t7kdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.694175 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46e8211f-e429-46c3-9dbf-2dd2d6490dfd" (UID: "46e8211f-e429-46c3-9dbf-2dd2d6490dfd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.777921 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7kdd\" (UniqueName: \"kubernetes.io/projected/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-kube-api-access-t7kdd\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.778217 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:51 crc kubenswrapper[4695]: I1126 13:26:51.778230 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46e8211f-e429-46c3-9dbf-2dd2d6490dfd-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.076849 4695 generic.go:334] "Generic (PLEG): container finished" podID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerID="9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786" exitCode=0 Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.077716 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hdmjm" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.088970 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmjm" event={"ID":"46e8211f-e429-46c3-9dbf-2dd2d6490dfd","Type":"ContainerDied","Data":"9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786"} Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.089086 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hdmjm" event={"ID":"46e8211f-e429-46c3-9dbf-2dd2d6490dfd","Type":"ContainerDied","Data":"13fd09a5a6c40b498a7cfb54eedab948541b3b880305eb71bfcf7e768a7bd98b"} Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.089122 4695 scope.go:117] "RemoveContainer" containerID="9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.112900 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmjm"] Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.114154 4695 scope.go:117] "RemoveContainer" containerID="b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.116188 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hdmjm"] Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.129456 4695 scope.go:117] "RemoveContainer" containerID="05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.148765 4695 scope.go:117] "RemoveContainer" containerID="9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.149197 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786\": container with ID starting with 9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786 not found: ID does not exist" containerID="9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.150402 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786"} err="failed to get container status \"9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786\": rpc error: code = NotFound desc = could not find container \"9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786\": container with ID starting with 9fafd3acda049e3067939263c99a7d105ac9cf247bdb3b2f0c6e3c3d1e484786 not found: ID does not exist" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.150443 4695 scope.go:117] "RemoveContainer" containerID="b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.150870 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441\": container with ID starting with b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441 not found: ID does not exist" containerID="b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.150905 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441"} err="failed to get container status \"b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441\": rpc error: code = NotFound desc = could not find container \"b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441\": container with ID 
starting with b15f9bf64f4b48a77b8b334a039f6edd536091541e6d9d24591ce13bbb990441 not found: ID does not exist" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.150930 4695 scope.go:117] "RemoveContainer" containerID="05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.151172 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97\": container with ID starting with 05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97 not found: ID does not exist" containerID="05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.151202 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97"} err="failed to get container status \"05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97\": rpc error: code = NotFound desc = could not find container \"05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97\": container with ID starting with 05ee4cd91aa16e3651dc3e2dc2687b5900346d13463344919f8b50ba76ec8d97 not found: ID does not exist" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.978897 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.979179 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="extract-utilities" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.979195 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="extract-utilities" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.979213 4695 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="extract-content" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.979221 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="extract-content" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.979234 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df1c7c05-23e1-4092-b6dc-9832a392eca6" containerName="pruner" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.979243 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="df1c7c05-23e1-4092-b6dc-9832a392eca6" containerName="pruner" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.979260 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="extract-utilities" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.979267 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="extract-utilities" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.979279 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="extract-content" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.979284 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="extract-content" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.979296 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="registry-server" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.979301 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="registry-server" Nov 26 13:26:52 crc kubenswrapper[4695]: E1126 13:26:52.979308 4695 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="registry-server" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.979313 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="registry-server" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.980891 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="53142449-ccfa-4cee-a77e-c1a4f9178691" containerName="registry-server" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.980911 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="df1c7c05-23e1-4092-b6dc-9832a392eca6" containerName="pruner" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.980923 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" containerName="registry-server" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.981288 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.983403 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.983748 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 26 13:26:52 crc kubenswrapper[4695]: I1126 13:26:52.995436 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.100276 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac14d99-7c2b-4ed5-aecc-4be508224d25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.100399 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac14d99-7c2b-4ed5-aecc-4be508224d25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.145550 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82p2l"] Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.146207 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-82p2l" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="registry-server" containerID="cri-o://2ba1ed52dc24ff22428ddf7dc9bd1b404a3423408a1dc4296dee2bd8d1074a9c" gracePeriod=2 Nov 26 13:26:53 crc 
kubenswrapper[4695]: I1126 13:26:53.172220 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e8211f-e429-46c3-9dbf-2dd2d6490dfd" path="/var/lib/kubelet/pods/46e8211f-e429-46c3-9dbf-2dd2d6490dfd/volumes" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.201341 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac14d99-7c2b-4ed5-aecc-4be508224d25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.201415 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac14d99-7c2b-4ed5-aecc-4be508224d25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.201484 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac14d99-7c2b-4ed5-aecc-4be508224d25-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.222609 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac14d99-7c2b-4ed5-aecc-4be508224d25-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.313246 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 26 13:26:53 crc kubenswrapper[4695]: I1126 13:26:53.703210 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 26 13:26:53 crc kubenswrapper[4695]: W1126 13:26:53.710278 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbac14d99_7c2b_4ed5_aecc_4be508224d25.slice/crio-79ba438431032fbde97e43dc517c89c756caed96e615b13597ad6b8276d36894 WatchSource:0}: Error finding container 79ba438431032fbde97e43dc517c89c756caed96e615b13597ad6b8276d36894: Status 404 returned error can't find the container with id 79ba438431032fbde97e43dc517c89c756caed96e615b13597ad6b8276d36894 Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.089462 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bac14d99-7c2b-4ed5-aecc-4be508224d25","Type":"ContainerStarted","Data":"79ba438431032fbde97e43dc517c89c756caed96e615b13597ad6b8276d36894"} Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.093131 4695 generic.go:334] "Generic (PLEG): container finished" podID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerID="2ba1ed52dc24ff22428ddf7dc9bd1b404a3423408a1dc4296dee2bd8d1074a9c" exitCode=0 Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.093172 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82p2l" event={"ID":"d7b38b12-b4e5-43d3-b7b7-b57169d06241","Type":"ContainerDied","Data":"2ba1ed52dc24ff22428ddf7dc9bd1b404a3423408a1dc4296dee2bd8d1074a9c"} Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.388077 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.515576 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrdlk\" (UniqueName: \"kubernetes.io/projected/d7b38b12-b4e5-43d3-b7b7-b57169d06241-kube-api-access-wrdlk\") pod \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.515668 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-catalog-content\") pod \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.515806 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-utilities\") pod \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\" (UID: \"d7b38b12-b4e5-43d3-b7b7-b57169d06241\") " Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.516667 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-utilities" (OuterVolumeSpecName: "utilities") pod "d7b38b12-b4e5-43d3-b7b7-b57169d06241" (UID: "d7b38b12-b4e5-43d3-b7b7-b57169d06241"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.516920 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.525801 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7b38b12-b4e5-43d3-b7b7-b57169d06241-kube-api-access-wrdlk" (OuterVolumeSpecName: "kube-api-access-wrdlk") pod "d7b38b12-b4e5-43d3-b7b7-b57169d06241" (UID: "d7b38b12-b4e5-43d3-b7b7-b57169d06241"). InnerVolumeSpecName "kube-api-access-wrdlk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.611969 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7b38b12-b4e5-43d3-b7b7-b57169d06241" (UID: "d7b38b12-b4e5-43d3-b7b7-b57169d06241"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.618840 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrdlk\" (UniqueName: \"kubernetes.io/projected/d7b38b12-b4e5-43d3-b7b7-b57169d06241-kube-api-access-wrdlk\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:54 crc kubenswrapper[4695]: I1126 13:26:54.618889 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7b38b12-b4e5-43d3-b7b7-b57169d06241-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.100251 4695 generic.go:334] "Generic (PLEG): container finished" podID="bac14d99-7c2b-4ed5-aecc-4be508224d25" containerID="32c3f1697a27cf65802360b0ed16e398d945888807ae5ccc23e739eb13966e64" exitCode=0 Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.100312 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bac14d99-7c2b-4ed5-aecc-4be508224d25","Type":"ContainerDied","Data":"32c3f1697a27cf65802360b0ed16e398d945888807ae5ccc23e739eb13966e64"} Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.102599 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-82p2l" event={"ID":"d7b38b12-b4e5-43d3-b7b7-b57169d06241","Type":"ContainerDied","Data":"c27b55fcbeb13488be21fb0681f07eca10c20c4a65467a3389c483e9d2bde2e0"} Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.102634 4695 scope.go:117] "RemoveContainer" containerID="2ba1ed52dc24ff22428ddf7dc9bd1b404a3423408a1dc4296dee2bd8d1074a9c" Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.102795 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-82p2l" Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.125807 4695 scope.go:117] "RemoveContainer" containerID="0520d5ca2f03a53f50ae97615350008eafff95bce3cf8b22af3734bc5c91a2fc" Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.147719 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-82p2l"] Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.151691 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-82p2l"] Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.153379 4695 scope.go:117] "RemoveContainer" containerID="72b0ca282ee59346479756338c7170fefd2e549a552b7e4a40fd461cc705319f" Nov 26 13:26:55 crc kubenswrapper[4695]: I1126 13:26:55.169166 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" path="/var/lib/kubelet/pods/d7b38b12-b4e5-43d3-b7b7-b57169d06241/volumes" Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:55.999891 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.000206 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.037201 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.075464 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.075518 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:26:56 crc 
kubenswrapper[4695]: I1126 13:26:56.117410 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9tvmf"
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.153637 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84jk7"
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.177276 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9tvmf"
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.355913 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.442921 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac14d99-7c2b-4ed5-aecc-4be508224d25-kube-api-access\") pod \"bac14d99-7c2b-4ed5-aecc-4be508224d25\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") "
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.443059 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac14d99-7c2b-4ed5-aecc-4be508224d25-kubelet-dir\") pod \"bac14d99-7c2b-4ed5-aecc-4be508224d25\" (UID: \"bac14d99-7c2b-4ed5-aecc-4be508224d25\") "
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.443413 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bac14d99-7c2b-4ed5-aecc-4be508224d25-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bac14d99-7c2b-4ed5-aecc-4be508224d25" (UID: "bac14d99-7c2b-4ed5-aecc-4be508224d25"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.443877 4695 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bac14d99-7c2b-4ed5-aecc-4be508224d25-kubelet-dir\") on node \"crc\" DevicePath \"\""
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.457565 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bac14d99-7c2b-4ed5-aecc-4be508224d25-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bac14d99-7c2b-4ed5-aecc-4be508224d25" (UID: "bac14d99-7c2b-4ed5-aecc-4be508224d25"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.545282 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bac14d99-7c2b-4ed5-aecc-4be508224d25-kube-api-access\") on node \"crc\" DevicePath \"\""
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.608722 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-k87lh"
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.608797 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-k87lh"
Nov 26 13:26:56 crc kubenswrapper[4695]: I1126 13:26:56.659625 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-k87lh"
Nov 26 13:26:57 crc kubenswrapper[4695]: I1126 13:26:57.114483 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Nov 26 13:26:57 crc kubenswrapper[4695]: I1126 13:26:57.114414 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"bac14d99-7c2b-4ed5-aecc-4be508224d25","Type":"ContainerDied","Data":"79ba438431032fbde97e43dc517c89c756caed96e615b13597ad6b8276d36894"}
Nov 26 13:26:57 crc kubenswrapper[4695]: I1126 13:26:57.114549 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ba438431032fbde97e43dc517c89c756caed96e615b13597ad6b8276d36894"
Nov 26 13:26:57 crc kubenswrapper[4695]: I1126 13:26:57.151224 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-k87lh"
Nov 26 13:26:59 crc kubenswrapper[4695]: I1126 13:26:59.749442 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k87lh"]
Nov 26 13:26:59 crc kubenswrapper[4695]: I1126 13:26:59.750042 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-k87lh" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="registry-server" containerID="cri-o://ec908bb7d965e627b29ca3eab618e66f9d9bbef2c271b341743681f221432e6c" gracePeriod=2
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.132683 4695 generic.go:334] "Generic (PLEG): container finished" podID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerID="ec908bb7d965e627b29ca3eab618e66f9d9bbef2c271b341743681f221432e6c" exitCode=0
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.132731 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k87lh" event={"ID":"45f264e1-d601-4586-81f1-1ddbb10c5bc1","Type":"ContainerDied","Data":"ec908bb7d965e627b29ca3eab618e66f9d9bbef2c271b341743681f221432e6c"}
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.576822 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 26 13:27:00 crc kubenswrapper[4695]: E1126 13:27:00.577075 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bac14d99-7c2b-4ed5-aecc-4be508224d25" containerName="pruner"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.577090 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="bac14d99-7c2b-4ed5-aecc-4be508224d25" containerName="pruner"
Nov 26 13:27:00 crc kubenswrapper[4695]: E1126 13:27:00.577101 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="registry-server"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.577109 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="registry-server"
Nov 26 13:27:00 crc kubenswrapper[4695]: E1126 13:27:00.577124 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="extract-utilities"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.577132 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="extract-utilities"
Nov 26 13:27:00 crc kubenswrapper[4695]: E1126 13:27:00.577143 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="extract-content"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.577149 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="extract-content"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.577258 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7b38b12-b4e5-43d3-b7b7-b57169d06241" containerName="registry-server"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.577272 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="bac14d99-7c2b-4ed5-aecc-4be508224d25" containerName="pruner"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.577708 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.581621 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.582442 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.589294 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.694016 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e14ba187-7938-43e0-9dcd-89167e38546c-kube-api-access\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.694063 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-var-lock\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.694119 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.730405 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k87lh"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.794715 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-catalog-content\") pod \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") "
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.794827 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d5ds\" (UniqueName: \"kubernetes.io/projected/45f264e1-d601-4586-81f1-1ddbb10c5bc1-kube-api-access-2d5ds\") pod \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") "
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.794851 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-utilities\") pod \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\" (UID: \"45f264e1-d601-4586-81f1-1ddbb10c5bc1\") "
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.794998 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.795046 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e14ba187-7938-43e0-9dcd-89167e38546c-kube-api-access\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.795078 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-var-lock\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.795142 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-var-lock\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.796051 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-kubelet-dir\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.796365 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-utilities" (OuterVolumeSpecName: "utilities") pod "45f264e1-d601-4586-81f1-1ddbb10c5bc1" (UID: "45f264e1-d601-4586-81f1-1ddbb10c5bc1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.800624 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f264e1-d601-4586-81f1-1ddbb10c5bc1-kube-api-access-2d5ds" (OuterVolumeSpecName: "kube-api-access-2d5ds") pod "45f264e1-d601-4586-81f1-1ddbb10c5bc1" (UID: "45f264e1-d601-4586-81f1-1ddbb10c5bc1"). InnerVolumeSpecName "kube-api-access-2d5ds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.821168 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e14ba187-7938-43e0-9dcd-89167e38546c-kube-api-access\") pod \"installer-9-crc\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.840412 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45f264e1-d601-4586-81f1-1ddbb10c5bc1" (UID: "45f264e1-d601-4586-81f1-1ddbb10c5bc1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.896555 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d5ds\" (UniqueName: \"kubernetes.io/projected/45f264e1-d601-4586-81f1-1ddbb10c5bc1-kube-api-access-2d5ds\") on node \"crc\" DevicePath \"\""
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.896601 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.896615 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f264e1-d601-4586-81f1-1ddbb10c5bc1-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 13:27:00 crc kubenswrapper[4695]: I1126 13:27:00.900368 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.146107 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-k87lh" event={"ID":"45f264e1-d601-4586-81f1-1ddbb10c5bc1","Type":"ContainerDied","Data":"52a4db40035f53525e12711e827d628ba12c967c0d68b63d5de446bc91bad273"}
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.146523 4695 scope.go:117] "RemoveContainer" containerID="ec908bb7d965e627b29ca3eab618e66f9d9bbef2c271b341743681f221432e6c"
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.146189 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-k87lh"
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.165997 4695 scope.go:117] "RemoveContainer" containerID="540c50bf86424fa9ffcbf3f19a98453a50759c5a6e7df55aa8ef4a3e6e0245f7"
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.181437 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-k87lh"]
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.183419 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-k87lh"]
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.207813 4695 scope.go:117] "RemoveContainer" containerID="02d318d4744b2263e64b81b9693d85e5fbd57c4a1a865ddd1e21733ad766ef77"
Nov 26 13:27:01 crc kubenswrapper[4695]: I1126 13:27:01.304634 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Nov 26 13:27:01 crc kubenswrapper[4695]: W1126 13:27:01.315243 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode14ba187_7938_43e0_9dcd_89167e38546c.slice/crio-d277d1bf4a07a71e42fef8d634c7616bc1e8b664ba707a22c3e8dffb43dfe7a1 WatchSource:0}: Error finding container d277d1bf4a07a71e42fef8d634c7616bc1e8b664ba707a22c3e8dffb43dfe7a1: Status 404 returned error can't find the container with id d277d1bf4a07a71e42fef8d634c7616bc1e8b664ba707a22c3e8dffb43dfe7a1
Nov 26 13:27:02 crc kubenswrapper[4695]: I1126 13:27:02.154463 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e14ba187-7938-43e0-9dcd-89167e38546c","Type":"ContainerStarted","Data":"e328f7e734d1ced71ddf3f241600971d26712729434002bb91b4c38385cad340"}
Nov 26 13:27:02 crc kubenswrapper[4695]: I1126 13:27:02.154826 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e14ba187-7938-43e0-9dcd-89167e38546c","Type":"ContainerStarted","Data":"d277d1bf4a07a71e42fef8d634c7616bc1e8b664ba707a22c3e8dffb43dfe7a1"}
Nov 26 13:27:02 crc kubenswrapper[4695]: I1126 13:27:02.173487 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.173461664 podStartE2EDuration="2.173461664s" podCreationTimestamp="2025-11-26 13:27:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:27:02.172200313 +0000 UTC m=+205.808025435" watchObservedRunningTime="2025-11-26 13:27:02.173461664 +0000 UTC m=+205.809286786"
Nov 26 13:27:03 crc kubenswrapper[4695]: I1126 13:27:03.173627 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" path="/var/lib/kubelet/pods/45f264e1-d601-4586-81f1-1ddbb10c5bc1/volumes"
Nov 26 13:27:06 crc kubenswrapper[4695]: I1126 13:27:06.396875 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 13:27:06 crc kubenswrapper[4695]: I1126 13:27:06.397374 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 13:27:06 crc kubenswrapper[4695]: I1126 13:27:06.397441 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2"
Nov 26 13:27:06 crc kubenswrapper[4695]: I1126 13:27:06.398145 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 26 13:27:06 crc kubenswrapper[4695]: I1126 13:27:06.398220 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24" gracePeriod=600
Nov 26 13:27:07 crc kubenswrapper[4695]: I1126 13:27:07.187814 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24" exitCode=0
Nov 26 13:27:07 crc kubenswrapper[4695]: I1126 13:27:07.188157 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24"}
Nov 26 13:27:07 crc kubenswrapper[4695]: I1126 13:27:07.188178 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"622882533bb496b4f2a02e17a6efe426030229a91b2dd1f8012fea475a41c1c7"}
Nov 26 13:27:17 crc kubenswrapper[4695]: I1126 13:27:17.382948 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-mxj48"]
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.271816 4695 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.273154 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a" gracePeriod=15
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.273180 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441" gracePeriod=15
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.273202 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540" gracePeriod=15
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.273236 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921" gracePeriod=15
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.273191 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462" gracePeriod=15
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.273735 4695 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274032 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="extract-utilities"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274057 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="extract-utilities"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274083 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274099 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274118 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274135 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274152 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274164 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274184 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274196 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274211 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274223 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274243 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="extract-content"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274255 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="extract-content"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274281 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274295 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274309 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="registry-server"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274323 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="registry-server"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274523 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274542 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274559 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274577 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274591 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f264e1-d601-4586-81f1-1ddbb10c5bc1" containerName="registry-server"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274607 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.274799 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274814 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.274974 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.276869 4695 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.278070 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.283582 4695 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.305235 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.479337 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.479683 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.479726 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.479758 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.479780 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.479934 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.479980 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.480009 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581439 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581533 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581567 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581594 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581628 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581675 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581697 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581737 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581760 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581593 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581716 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581868 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581924 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581562 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581708 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.581710 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: I1126 13:27:39.603017 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Nov 26 13:27:39 crc kubenswrapper[4695]: W1126 13:27:39.648458 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-26bf087426e180238b15cee07cf7d0e6193ad22066d2cce6d1624b6248deb72a WatchSource:0}: Error finding container 26bf087426e180238b15cee07cf7d0e6193ad22066d2cce6d1624b6248deb72a: Status 404 returned error can't find the container with id 26bf087426e180238b15cee07cf7d0e6193ad22066d2cce6d1624b6248deb72a
Nov 26 13:27:39 crc kubenswrapper[4695]: E1126 13:27:39.655308 4695 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b9180632460fb openshift-kube-apiserver 0 0001-01-01 00:00:00
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 13:27:39.650851067 +0000 UTC m=+243.286676139,LastTimestamp:2025-11-26 13:27:39.650851067 +0000 UTC m=+243.286676139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.401499 4695 generic.go:334] "Generic (PLEG): container finished" podID="e14ba187-7938-43e0-9dcd-89167e38546c" containerID="e328f7e734d1ced71ddf3f241600971d26712729434002bb91b4c38385cad340" exitCode=0 Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.401572 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e14ba187-7938-43e0-9dcd-89167e38546c","Type":"ContainerDied","Data":"e328f7e734d1ced71ddf3f241600971d26712729434002bb91b4c38385cad340"} Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.403170 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.404770 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 26 
13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.405532 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.406248 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.407687 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441" exitCode=0 Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.407769 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921" exitCode=0 Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.407792 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462" exitCode=0 Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.407811 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540" exitCode=2 Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.407824 4695 scope.go:117] "RemoveContainer" containerID="208bc03fc3cdcc99f6525f4aabeb690be937d33f9fc7951a93a77d206cbd7845" Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.411320 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"cea323c1dc0930e89aab60e24045c64ea25d050f5ce56bd03b3fb0de0ef3a96b"} Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.411413 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"26bf087426e180238b15cee07cf7d0e6193ad22066d2cce6d1624b6248deb72a"} Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.412421 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:40 crc kubenswrapper[4695]: I1126 13:27:40.413052 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.449214 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.649562 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.650609 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.651544 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.651886 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.652577 4695 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.709622 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.709691 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.709729 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.709774 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.709782 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.709866 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.710087 4695 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.710112 4695 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.710125 4695 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.711574 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.712320 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.712717 4695 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.713240 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.811249 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e14ba187-7938-43e0-9dcd-89167e38546c-kube-api-access\") pod \"e14ba187-7938-43e0-9dcd-89167e38546c\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.811340 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-kubelet-dir\") pod \"e14ba187-7938-43e0-9dcd-89167e38546c\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.811427 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-var-lock\") pod \"e14ba187-7938-43e0-9dcd-89167e38546c\" (UID: \"e14ba187-7938-43e0-9dcd-89167e38546c\") " Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.811495 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e14ba187-7938-43e0-9dcd-89167e38546c" (UID: "e14ba187-7938-43e0-9dcd-89167e38546c"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.811568 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-var-lock" (OuterVolumeSpecName: "var-lock") pod "e14ba187-7938-43e0-9dcd-89167e38546c" (UID: "e14ba187-7938-43e0-9dcd-89167e38546c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.811994 4695 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.812028 4695 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e14ba187-7938-43e0-9dcd-89167e38546c-var-lock\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.817618 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e14ba187-7938-43e0-9dcd-89167e38546c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e14ba187-7938-43e0-9dcd-89167e38546c" (UID: "e14ba187-7938-43e0-9dcd-89167e38546c"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:27:41 crc kubenswrapper[4695]: I1126 13:27:41.914058 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e14ba187-7938-43e0-9dcd-89167e38546c-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:42 crc kubenswrapper[4695]: E1126 13:27:42.212686 4695 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" volumeName="registry-storage" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.415934 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" podUID="29726719-a46b-4403-b241-5397d624f714" containerName="oauth-openshift" containerID="cri-o://6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f" gracePeriod=15 Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.458135 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.458460 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"e14ba187-7938-43e0-9dcd-89167e38546c","Type":"ContainerDied","Data":"d277d1bf4a07a71e42fef8d634c7616bc1e8b664ba707a22c3e8dffb43dfe7a1"} Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.458508 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d277d1bf4a07a71e42fef8d634c7616bc1e8b664ba707a22c3e8dffb43dfe7a1" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.463497 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.464373 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a" exitCode=0 Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.464445 4695 scope.go:117] "RemoveContainer" containerID="4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.464537 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.497330 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.497501 4695 scope.go:117] "RemoveContainer" containerID="e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.497620 4695 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.497840 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.498133 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.498454 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.498803 4695 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.512605 4695 scope.go:117] "RemoveContainer" containerID="d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.535221 4695 scope.go:117] "RemoveContainer" containerID="f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.563661 4695 scope.go:117] "RemoveContainer" containerID="cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.590035 4695 scope.go:117] "RemoveContainer" containerID="14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.615546 4695 scope.go:117] "RemoveContainer" containerID="4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441" Nov 26 13:27:42 crc kubenswrapper[4695]: E1126 13:27:42.616288 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\": container with ID starting with 4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441 not found: ID does not exist" containerID="4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441" 
Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.616390 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441"} err="failed to get container status \"4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\": rpc error: code = NotFound desc = could not find container \"4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441\": container with ID starting with 4eee4a7882be092c5cce2e5341102535f574698539ce40364dd7457f47f6a441 not found: ID does not exist" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.616426 4695 scope.go:117] "RemoveContainer" containerID="e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921" Nov 26 13:27:42 crc kubenswrapper[4695]: E1126 13:27:42.617069 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\": container with ID starting with e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921 not found: ID does not exist" containerID="e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.617142 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921"} err="failed to get container status \"e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\": rpc error: code = NotFound desc = could not find container \"e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921\": container with ID starting with e7b719decb28f0b67b2252f1ddce7d5e12c2640bf84a1e8ecb8907a7bbaf8921 not found: ID does not exist" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.617196 4695 scope.go:117] "RemoveContainer" 
containerID="d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462" Nov 26 13:27:42 crc kubenswrapper[4695]: E1126 13:27:42.617677 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\": container with ID starting with d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462 not found: ID does not exist" containerID="d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.617712 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462"} err="failed to get container status \"d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\": rpc error: code = NotFound desc = could not find container \"d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462\": container with ID starting with d9e3ef648747bef9f077ba2df67cbba73636f0cf499619d43af74abf1ea42462 not found: ID does not exist" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.617732 4695 scope.go:117] "RemoveContainer" containerID="f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540" Nov 26 13:27:42 crc kubenswrapper[4695]: E1126 13:27:42.617996 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\": container with ID starting with f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540 not found: ID does not exist" containerID="f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.618024 4695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540"} err="failed to get container status \"f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\": rpc error: code = NotFound desc = could not find container \"f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540\": container with ID starting with f5e4fa2aa36258d010bae3906fedbdda125517fd68a1d139f3b7e99e10fdf540 not found: ID does not exist" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.618045 4695 scope.go:117] "RemoveContainer" containerID="cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a" Nov 26 13:27:42 crc kubenswrapper[4695]: E1126 13:27:42.618279 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\": container with ID starting with cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a not found: ID does not exist" containerID="cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.618309 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a"} err="failed to get container status \"cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\": rpc error: code = NotFound desc = could not find container \"cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a\": container with ID starting with cfa0154944d4e748fc28f64d0951dff882a44b49ac606eba53a859c18ecbcf2a not found: ID does not exist" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.618326 4695 scope.go:117] "RemoveContainer" containerID="14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e" Nov 26 13:27:42 crc kubenswrapper[4695]: E1126 13:27:42.618607 4695 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\": container with ID starting with 14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e not found: ID does not exist" containerID="14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.618634 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e"} err="failed to get container status \"14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\": rpc error: code = NotFound desc = could not find container \"14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e\": container with ID starting with 14730a58e02b1dbf7e35205a16b98ffb130a0cb607b588865fac6b06564d592e not found: ID does not exist" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.826825 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.827293 4695 status_manager.go:851] "Failed to get status for pod" podUID="29726719-a46b-4403-b241-5397d624f714" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mxj48\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.827514 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.827862 4695 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.828420 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946256 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-provider-selection\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946310 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-router-certs\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946390 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-idp-0-file-data\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946413 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-error\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946448 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-login\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946489 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-serving-cert\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946520 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-ocp-branding-template\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946546 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-audit-policies\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946569 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-service-ca\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946595 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-trusted-ca-bundle\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946614 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vtv6\" (UniqueName: 
\"kubernetes.io/projected/29726719-a46b-4403-b241-5397d624f714-kube-api-access-2vtv6\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946634 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-cliconfig\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946658 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29726719-a46b-4403-b241-5397d624f714-audit-dir\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.946677 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-session\") pod \"29726719-a46b-4403-b241-5397d624f714\" (UID: \"29726719-a46b-4403-b241-5397d624f714\") " Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.948262 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.948328 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.948860 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.948984 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29726719-a46b-4403-b241-5397d624f714-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.949232 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.955486 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.955951 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.956218 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.956720 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.957016 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.959196 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29726719-a46b-4403-b241-5397d624f714-kube-api-access-2vtv6" (OuterVolumeSpecName: "kube-api-access-2vtv6") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "kube-api-access-2vtv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.961433 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.963534 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:42 crc kubenswrapper[4695]: I1126 13:27:42.963830 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "29726719-a46b-4403-b241-5397d624f714" (UID: "29726719-a46b-4403-b241-5397d624f714"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048107 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048151 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048161 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048171 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048180 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048191 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048201 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048210 4695 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048218 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048228 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2vtv6\" (UniqueName: \"kubernetes.io/projected/29726719-a46b-4403-b241-5397d624f714-kube-api-access-2vtv6\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048236 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048246 4695 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048255 4695 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/29726719-a46b-4403-b241-5397d624f714-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.048263 4695 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/29726719-a46b-4403-b241-5397d624f714-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.182380 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.480134 4695 generic.go:334] "Generic (PLEG): container finished" podID="29726719-a46b-4403-b241-5397d624f714" containerID="6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f" exitCode=0 Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.480187 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" event={"ID":"29726719-a46b-4403-b241-5397d624f714","Type":"ContainerDied","Data":"6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f"} Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.480214 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" event={"ID":"29726719-a46b-4403-b241-5397d624f714","Type":"ContainerDied","Data":"388fa39dafb3321e586dc50bfc45c54de50a7b6f38abaa3eaae25a44e014931e"} Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.480232 4695 scope.go:117] 
"RemoveContainer" containerID="6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.480273 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.481420 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.481667 4695 status_manager.go:851] "Failed to get status for pod" podUID="29726719-a46b-4403-b241-5397d624f714" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mxj48\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.482096 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.489482 4695 status_manager.go:851] "Failed to get status for pod" podUID="29726719-a46b-4403-b241-5397d624f714" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mxj48\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 
13:27:43.490206 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.490731 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.512886 4695 scope.go:117] "RemoveContainer" containerID="6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f" Nov 26 13:27:43 crc kubenswrapper[4695]: E1126 13:27:43.513399 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f\": container with ID starting with 6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f not found: ID does not exist" containerID="6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f" Nov 26 13:27:43 crc kubenswrapper[4695]: I1126 13:27:43.513443 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f"} err="failed to get container status \"6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f\": rpc error: code = NotFound desc = could not find container \"6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f\": container with ID starting with 6621f5cc0c4f0f4605a52903a2bf95a6504cfdc3d618ed329f651cd5152abd0f not found: ID does not exist" Nov 26 
13:27:47 crc kubenswrapper[4695]: I1126 13:27:47.165508 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: I1126 13:27:47.166660 4695 status_manager.go:851] "Failed to get status for pod" podUID="29726719-a46b-4403-b241-5397d624f714" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mxj48\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: I1126 13:27:47.167233 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: E1126 13:27:47.876142 4695 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: E1126 13:27:47.877142 4695 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: E1126 13:27:47.877668 4695 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: E1126 13:27:47.878105 4695 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: E1126 13:27:47.878662 4695 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:47 crc kubenswrapper[4695]: I1126 13:27:47.878743 4695 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 26 13:27:47 crc kubenswrapper[4695]: E1126 13:27:47.879105 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="200ms" Nov 26 13:27:48 crc kubenswrapper[4695]: E1126 13:27:48.022875 4695 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.156:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b9180632460fb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container 
image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-26 13:27:39.650851067 +0000 UTC m=+243.286676139,LastTimestamp:2025-11-26 13:27:39.650851067 +0000 UTC m=+243.286676139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 26 13:27:48 crc kubenswrapper[4695]: E1126 13:27:48.080337 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="400ms" Nov 26 13:27:48 crc kubenswrapper[4695]: E1126 13:27:48.481785 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="800ms" Nov 26 13:27:49 crc kubenswrapper[4695]: E1126 13:27:49.283708 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="1.6s" Nov 26 13:27:50 crc kubenswrapper[4695]: E1126 13:27:50.884793 4695 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.156:6443: connect: connection refused" interval="3.2s" Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.161880 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.162848 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.163495 4695 status_manager.go:851] "Failed to get status for pod" podUID="29726719-a46b-4403-b241-5397d624f714" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mxj48\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.164041 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.184083 4695 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.184150 4695 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:27:51 crc kubenswrapper[4695]: E1126 13:27:51.184861 4695 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection 
refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.185560 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:51 crc kubenswrapper[4695]: W1126 13:27:51.220648 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-797ddf2923d59e24b0e461f0a173dd0fe56a0bfb7907ef77fd5f53689d81ef5c WatchSource:0}: Error finding container 797ddf2923d59e24b0e461f0a173dd0fe56a0bfb7907ef77fd5f53689d81ef5c: Status 404 returned error can't find the container with id 797ddf2923d59e24b0e461f0a173dd0fe56a0bfb7907ef77fd5f53689d81ef5c Nov 26 13:27:51 crc kubenswrapper[4695]: I1126 13:27:51.529302 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"797ddf2923d59e24b0e461f0a173dd0fe56a0bfb7907ef77fd5f53689d81ef5c"} Nov 26 13:27:52 crc kubenswrapper[4695]: I1126 13:27:52.538752 4695 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="404830560bcc9a0d5b578be5d5e2832fe27bff95d8baae4504fdbe0bf5651b31" exitCode=0 Nov 26 13:27:52 crc kubenswrapper[4695]: I1126 13:27:52.538830 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"404830560bcc9a0d5b578be5d5e2832fe27bff95d8baae4504fdbe0bf5651b31"} Nov 26 13:27:52 crc kubenswrapper[4695]: I1126 13:27:52.539460 4695 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:27:52 crc kubenswrapper[4695]: I1126 13:27:52.539513 4695 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:27:52 crc kubenswrapper[4695]: I1126 13:27:52.540182 4695 status_manager.go:851] "Failed to get status for pod" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:52 crc kubenswrapper[4695]: E1126 13:27:52.540326 4695 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:52 crc kubenswrapper[4695]: I1126 13:27:52.540725 4695 status_manager.go:851] "Failed to get status for pod" podUID="29726719-a46b-4403-b241-5397d624f714" pod="openshift-authentication/oauth-openshift-558db77b4-mxj48" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-mxj48\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:52 crc kubenswrapper[4695]: I1126 13:27:52.541295 4695 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.156:6443: connect: connection refused" Nov 26 13:27:53 crc kubenswrapper[4695]: I1126 13:27:53.548928 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 26 13:27:53 crc kubenswrapper[4695]: I1126 13:27:53.549026 4695 generic.go:334] 
"Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc" exitCode=1 Nov 26 13:27:53 crc kubenswrapper[4695]: I1126 13:27:53.549107 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc"} Nov 26 13:27:53 crc kubenswrapper[4695]: I1126 13:27:53.549723 4695 scope.go:117] "RemoveContainer" containerID="4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc" Nov 26 13:27:53 crc kubenswrapper[4695]: I1126 13:27:53.559691 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c17a3c6ded3a15a744eb53220bc83eb2719b020b6463150c9f106ec64edf2de6"} Nov 26 13:27:53 crc kubenswrapper[4695]: I1126 13:27:53.559750 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1bbb2c10c8dfa86d513afb563452f9d2d68ea14f93aec5451e32674f1bab498d"} Nov 26 13:27:53 crc kubenswrapper[4695]: I1126 13:27:53.559762 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"37260dec193ec57ed2fcd8b3e028c3189d98dd4e1a1df53b783d70a5e690e3df"} Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.529850 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.589158 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.589304 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5876c3ab80044b37d3e6c80a2b8d564a17e621ad8fe89d54f59a59c6e94ee257"} Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.596173 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1d9f9eff261be240800e30ff328b56df80414acb27d00b4860f83f5b742ac3da"} Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.596250 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b6c4978cc3671f6e90c05027bcabcb1265f97eead28e33ceb24162f82b60a16d"} Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.596677 4695 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.596706 4695 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:27:54 crc kubenswrapper[4695]: I1126 13:27:54.597300 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:56 crc kubenswrapper[4695]: I1126 13:27:56.186294 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:56 crc kubenswrapper[4695]: I1126 13:27:56.186786 4695 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:56 crc kubenswrapper[4695]: I1126 13:27:56.193229 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:59 crc kubenswrapper[4695]: I1126 13:27:59.612325 4695 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:27:59 crc kubenswrapper[4695]: I1126 13:27:59.688453 4695 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="30204d98-f5ce-465f-a38d-9f5a0e5b0690" Nov 26 13:28:00 crc kubenswrapper[4695]: I1126 13:28:00.432716 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:28:00 crc kubenswrapper[4695]: I1126 13:28:00.631005 4695 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:28:00 crc kubenswrapper[4695]: I1126 13:28:00.631039 4695 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:28:00 crc kubenswrapper[4695]: I1126 13:28:00.635311 4695 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="30204d98-f5ce-465f-a38d-9f5a0e5b0690" Nov 26 13:28:00 crc kubenswrapper[4695]: I1126 13:28:00.636595 4695 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://37260dec193ec57ed2fcd8b3e028c3189d98dd4e1a1df53b783d70a5e690e3df" Nov 26 13:28:00 crc kubenswrapper[4695]: I1126 13:28:00.636627 
4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:28:01 crc kubenswrapper[4695]: I1126 13:28:01.636570 4695 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:28:01 crc kubenswrapper[4695]: I1126 13:28:01.636615 4695 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="9294908a-fb2d-4b41-b754-46ae6e357e11" Nov 26 13:28:01 crc kubenswrapper[4695]: I1126 13:28:01.639635 4695 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="30204d98-f5ce-465f-a38d-9f5a0e5b0690" Nov 26 13:28:04 crc kubenswrapper[4695]: I1126 13:28:04.529993 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:28:04 crc kubenswrapper[4695]: I1126 13:28:04.531959 4695 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 26 13:28:04 crc kubenswrapper[4695]: I1126 13:28:04.532195 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 26 13:28:10 crc kubenswrapper[4695]: I1126 13:28:10.656870 4695 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"image-registry-certificates" Nov 26 13:28:10 crc kubenswrapper[4695]: I1126 13:28:10.679087 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 26 13:28:10 crc kubenswrapper[4695]: I1126 13:28:10.800578 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.178022 4695 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.461783 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.476638 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.649477 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.742823 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.751644 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.761442 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 26 13:28:11 crc kubenswrapper[4695]: I1126 13:28:11.917628 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.000678 4695 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.005110 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.161458 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.318674 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.381287 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.414888 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.444016 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.539551 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.594342 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.739752 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 13:28:12.748141 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 26 13:28:12 crc kubenswrapper[4695]: I1126 
13:28:12.933587 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.020510 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.030727 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.088992 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.349897 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.472476 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.523604 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.784433 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.786411 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 26 13:28:13 crc kubenswrapper[4695]: I1126 13:28:13.941093 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.358332 4695 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.381973 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.433230 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.454827 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.456566 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.470194 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.530207 4695 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.530298 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.601647 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 26 
13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.623449 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.649620 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.665099 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.672624 4695 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.683286 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.707558 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.711122 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.747230 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.795823 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.840320 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.888984 4695 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.890888 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=35.890863385 podStartE2EDuration="35.890863385s" podCreationTimestamp="2025-11-26 13:27:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:27:59.614971575 +0000 UTC m=+263.250796687" watchObservedRunningTime="2025-11-26 13:28:14.890863385 +0000 UTC m=+278.526688467" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.894736 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-mxj48"] Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.894809 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.899491 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.914457 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.914433072 podStartE2EDuration="15.914433072s" podCreationTimestamp="2025-11-26 13:27:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:28:14.910927969 +0000 UTC m=+278.546753051" watchObservedRunningTime="2025-11-26 13:28:14.914433072 +0000 UTC m=+278.550258154" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.923887 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 26 13:28:14 crc 
kubenswrapper[4695]: I1126 13:28:14.954297 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 26 13:28:14 crc kubenswrapper[4695]: I1126 13:28:14.972947 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.004135 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.033189 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.059931 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.101987 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.121966 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.148653 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.174733 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29726719-a46b-4403-b241-5397d624f714" path="/var/lib/kubelet/pods/29726719-a46b-4403-b241-5397d624f714/volumes" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.183812 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.472553 4695 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.475001 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.485004 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.640450 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.668409 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.791163 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.807259 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.815061 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.893989 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 26 13:28:15 crc kubenswrapper[4695]: I1126 13:28:15.922884 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.088947 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.091932 4695 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.093164 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.161690 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.173814 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.202649 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.319091 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.372747 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.511842 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.522309 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.534711 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.546971 4695 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.663371 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.698808 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.716524 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.824726 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.858196 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.903716 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.988768 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 26 13:28:16 crc kubenswrapper[4695]: I1126 13:28:16.998099 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.010664 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.046206 4695 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.050361 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.086544 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.094648 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.163009 4695 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.257646 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.284933 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.310474 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.333790 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.395284 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.477471 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.498008 4695 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.506227 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.516009 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.569751 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.570308 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.614809 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.731459 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.764337 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.800748 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 26 13:28:17 crc kubenswrapper[4695]: I1126 13:28:17.852502 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.146519 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 
13:28:18.147277 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.150778 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.253662 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.275925 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.448266 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.533099 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.588937 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.609860 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.620722 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-9565f95f5-2q8fs"] Nov 26 13:28:18 crc kubenswrapper[4695]: E1126 13:28:18.620969 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" containerName="installer" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.620982 4695 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e14ba187-7938-43e0-9dcd-89167e38546c" containerName="installer" Nov 26 13:28:18 crc kubenswrapper[4695]: E1126 13:28:18.620997 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29726719-a46b-4403-b241-5397d624f714" containerName="oauth-openshift" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.621003 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="29726719-a46b-4403-b241-5397d624f714" containerName="oauth-openshift" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.621095 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e14ba187-7938-43e0-9dcd-89167e38546c" containerName="installer" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.621106 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="29726719-a46b-4403-b241-5397d624f714" containerName="oauth-openshift" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.621546 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.623325 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.623525 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.623921 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.624341 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.624411 4695 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.624711 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.624965 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.624979 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.625271 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.625913 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.626127 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.626291 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.632698 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.637081 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645101 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645165 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-service-ca\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645320 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-session\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645486 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645371 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645664 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645785 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-error\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645852 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645936 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-audit-policies\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.645991 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.646028 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-router-certs\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.646197 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlg9x\" (UniqueName: \"kubernetes.io/projected/4f02f520-c294-4f73-a1f9-c15629c7f630-kube-api-access-rlg9x\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.646268 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.646302 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f02f520-c294-4f73-a1f9-c15629c7f630-audit-dir\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 
13:28:18.646331 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-login\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.690011 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.697528 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.701389 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747403 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747493 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f02f520-c294-4f73-a1f9-c15629c7f630-audit-dir\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747544 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-login\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747571 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4f02f520-c294-4f73-a1f9-c15629c7f630-audit-dir\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747593 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747717 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-service-ca\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747754 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-session\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " 
pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747778 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747866 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747918 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-error\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.747950 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.748005 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-audit-policies\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.748044 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.748065 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-router-certs\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.748143 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlg9x\" (UniqueName: \"kubernetes.io/projected/4f02f520-c294-4f73-a1f9-c15629c7f630-kube-api-access-rlg9x\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.748591 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " 
pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.750008 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-cliconfig\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.750105 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-audit-policies\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.751575 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-service-ca\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.754445 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.754845 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.755183 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-login\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.755575 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-serving-cert\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.758126 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-user-template-error\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.758376 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-session\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc 
kubenswrapper[4695]: I1126 13:28:18.758399 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.760918 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/4f02f520-c294-4f73-a1f9-c15629c7f630-v4-0-config-system-router-certs\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.767178 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlg9x\" (UniqueName: \"kubernetes.io/projected/4f02f520-c294-4f73-a1f9-c15629c7f630-kube-api-access-rlg9x\") pod \"oauth-openshift-9565f95f5-2q8fs\" (UID: \"4f02f520-c294-4f73-a1f9-c15629c7f630\") " pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.769612 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.773826 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.835978 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.900683 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 
26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.906436 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.924939 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.939929 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.943959 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 26 13:28:18 crc kubenswrapper[4695]: I1126 13:28:18.977028 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.047703 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.149143 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.184368 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.304689 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.338913 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.344039 4695 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.365474 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.480659 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.577881 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.590638 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.590815 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.613928 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.650436 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.657413 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.686533 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.732467 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 
13:28:19.738295 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.770050 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.792330 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.850004 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.932695 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 26 13:28:19 crc kubenswrapper[4695]: I1126 13:28:19.987607 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.111756 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.169227 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.237107 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.256001 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.336912 4695 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.350745 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.357396 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.516573 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.657926 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.658511 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.676073 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.744755 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.746458 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.778836 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.819932 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.851378 4695 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.902685 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 13:28:20 crc kubenswrapper[4695]: I1126 13:28:20.942749 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.002735 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.063019 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.071194 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.143756 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.171817 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.180705 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.204036 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.273269 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9565f95f5-2q8fs"] Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.297050 4695 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.305900 4695 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.624213 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.780203 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.813381 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-9565f95f5-2q8fs"] Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.816416 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.915552 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.923466 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 26 13:28:21 crc kubenswrapper[4695]: I1126 13:28:21.977965 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.204703 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.324425 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.380156 4695 kubelet.go:2431] "SyncLoop 
REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.380413 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://cea323c1dc0930e89aab60e24045c64ea25d050f5ce56bd03b3fb0de0ef3a96b" gracePeriod=5 Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.393042 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.420058 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.651585 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.742277 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.754601 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.756741 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.759423 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.768879 4695 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.794363 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" event={"ID":"4f02f520-c294-4f73-a1f9-c15629c7f630","Type":"ContainerStarted","Data":"7cdb748cd647c70b362c49b2319ca27dc2c40ef918f4cd178833bb58495f7223"} Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.794422 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" event={"ID":"4f02f520-c294-4f73-a1f9-c15629c7f630","Type":"ContainerStarted","Data":"a17418d6bb53b86fdbeb01de51768202acada1e538346e00a2119ab286851953"} Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.794751 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.800100 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.817887 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-9565f95f5-2q8fs" podStartSLOduration=65.817864466 podStartE2EDuration="1m5.817864466s" podCreationTimestamp="2025-11-26 13:27:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:28:22.813846176 +0000 UTC m=+286.449671258" watchObservedRunningTime="2025-11-26 13:28:22.817864466 +0000 UTC m=+286.453689568" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.824701 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 
13:28:22.832272 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 26 13:28:22 crc kubenswrapper[4695]: I1126 13:28:22.925221 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.004423 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.167636 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.183593 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.324413 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.324862 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.556861 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.597000 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.605953 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.635746 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 26 13:28:23 crc 
kubenswrapper[4695]: I1126 13:28:23.727232 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.762589 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.899537 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.914380 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 13:28:23 crc kubenswrapper[4695]: I1126 13:28:23.922328 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.174499 4695 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.196996 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.288755 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.336488 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.415119 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.528130 4695 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.530126 4695 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.530210 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.530296 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.531294 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"5876c3ab80044b37d3e6c80a2b8d564a17e621ad8fe89d54f59a59c6e94ee257"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.531603 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://5876c3ab80044b37d3e6c80a2b8d564a17e621ad8fe89d54f59a59c6e94ee257" gracePeriod=30 Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.671259 4695 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.765089 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.784976 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.819137 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:28:24 crc kubenswrapper[4695]: I1126 13:28:24.966076 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 26 13:28:25 crc kubenswrapper[4695]: I1126 13:28:25.226070 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 26 13:28:25 crc kubenswrapper[4695]: I1126 13:28:25.366900 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 26 13:28:25 crc kubenswrapper[4695]: I1126 13:28:25.598734 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 26 13:28:25 crc kubenswrapper[4695]: I1126 13:28:25.852544 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 26 13:28:25 crc kubenswrapper[4695]: I1126 13:28:25.955763 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 26 13:28:26 crc kubenswrapper[4695]: I1126 13:28:26.012964 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 13:28:26 crc 
kubenswrapper[4695]: I1126 13:28:26.205022 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 26 13:28:26 crc kubenswrapper[4695]: I1126 13:28:26.532656 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 26 13:28:26 crc kubenswrapper[4695]: I1126 13:28:26.612911 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 26 13:28:26 crc kubenswrapper[4695]: I1126 13:28:26.818365 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 26 13:28:26 crc kubenswrapper[4695]: I1126 13:28:26.950098 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 26 13:28:27 crc kubenswrapper[4695]: I1126 13:28:27.826959 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 13:28:27 crc kubenswrapper[4695]: I1126 13:28:27.827433 4695 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="cea323c1dc0930e89aab60e24045c64ea25d050f5ce56bd03b3fb0de0ef3a96b" exitCode=137 Nov 26 13:28:27 crc kubenswrapper[4695]: I1126 13:28:27.985218 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 13:28:27 crc kubenswrapper[4695]: I1126 13:28:27.985614 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.078634 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.078736 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.078840 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.078866 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.078954 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.078968 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.079000 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.079104 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.079332 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.079535 4695 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.079597 4695 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.079623 4695 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.079647 4695 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.087501 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.181762 4695 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.599235 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.839748 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.839863 4695 scope.go:117] "RemoveContainer" containerID="cea323c1dc0930e89aab60e24045c64ea25d050f5ce56bd03b3fb0de0ef3a96b" Nov 26 13:28:28 crc kubenswrapper[4695]: I1126 13:28:28.840026 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 26 13:28:29 crc kubenswrapper[4695]: I1126 13:28:29.174096 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 26 13:28:29 crc kubenswrapper[4695]: I1126 13:28:29.174324 4695 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Nov 26 13:28:29 crc kubenswrapper[4695]: I1126 13:28:29.184889 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 26 13:28:29 crc kubenswrapper[4695]: I1126 13:28:29.184940 4695 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ee9d135d-4717-42c6-b764-9199f4f02d7c" Nov 26 13:28:29 crc kubenswrapper[4695]: I1126 13:28:29.188035 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 26 13:28:29 crc kubenswrapper[4695]: I1126 13:28:29.188057 4695 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ee9d135d-4717-42c6-b764-9199f4f02d7c" Nov 26 13:28:36 crc kubenswrapper[4695]: I1126 13:28:36.950288 4695 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 26 13:28:55 crc kubenswrapper[4695]: I1126 13:28:55.017591 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 26 13:28:55 crc kubenswrapper[4695]: I1126 13:28:55.019329 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 26 13:28:55 crc kubenswrapper[4695]: I1126 13:28:55.019379 4695 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="5876c3ab80044b37d3e6c80a2b8d564a17e621ad8fe89d54f59a59c6e94ee257" exitCode=137 Nov 26 13:28:55 crc kubenswrapper[4695]: I1126 13:28:55.019416 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"5876c3ab80044b37d3e6c80a2b8d564a17e621ad8fe89d54f59a59c6e94ee257"} Nov 26 13:28:55 crc kubenswrapper[4695]: I1126 13:28:55.019445 4695 scope.go:117] "RemoveContainer" containerID="4ad699ddc00399ddf556d5efeb394a5d7edc4fdd5ef31354d543f9ba8f9b0dbc" Nov 26 13:28:56 crc kubenswrapper[4695]: I1126 13:28:56.051289 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Nov 26 13:28:56 crc kubenswrapper[4695]: I1126 13:28:56.054254 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b7e8e5228a0f3d7d456820d732a36ac2b6bd5867588ac1340161c5a3ae4cb27e"} Nov 26 13:29:00 crc kubenswrapper[4695]: I1126 13:29:00.432784 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:29:04 crc kubenswrapper[4695]: I1126 13:29:04.530086 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:29:04 crc kubenswrapper[4695]: I1126 13:29:04.533935 4695 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:29:05 crc kubenswrapper[4695]: I1126 13:29:05.115313 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.284743 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp"] Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.285521 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" podUID="7f633ea2-f78f-4a36-8f28-13c2f053c349" containerName="route-controller-manager" containerID="cri-o://e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06" gracePeriod=30 Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.365757 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5mdsv"] Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.366016 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" podUID="7c5213f4-2ee5-4136-b62c-7b291044e467" containerName="controller-manager" containerID="cri-o://68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25" gracePeriod=30 Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.722571 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.728688 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.844486 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-client-ca\") pod \"7c5213f4-2ee5-4136-b62c-7b291044e467\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.844881 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5213f4-2ee5-4136-b62c-7b291044e467-serving-cert\") pod \"7c5213f4-2ee5-4136-b62c-7b291044e467\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.844943 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4gbz\" (UniqueName: \"kubernetes.io/projected/7c5213f4-2ee5-4136-b62c-7b291044e467-kube-api-access-p4gbz\") pod \"7c5213f4-2ee5-4136-b62c-7b291044e467\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.844973 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-config\") pod \"7f633ea2-f78f-4a36-8f28-13c2f053c349\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845015 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-config\") pod \"7c5213f4-2ee5-4136-b62c-7b291044e467\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845065 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7f633ea2-f78f-4a36-8f28-13c2f053c349-serving-cert\") pod \"7f633ea2-f78f-4a36-8f28-13c2f053c349\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845114 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-proxy-ca-bundles\") pod \"7c5213f4-2ee5-4136-b62c-7b291044e467\" (UID: \"7c5213f4-2ee5-4136-b62c-7b291044e467\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845138 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-client-ca\") pod \"7f633ea2-f78f-4a36-8f28-13c2f053c349\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845183 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlq8c\" (UniqueName: \"kubernetes.io/projected/7f633ea2-f78f-4a36-8f28-13c2f053c349-kube-api-access-hlq8c\") pod \"7f633ea2-f78f-4a36-8f28-13c2f053c349\" (UID: \"7f633ea2-f78f-4a36-8f28-13c2f053c349\") " Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845251 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-client-ca" (OuterVolumeSpecName: "client-ca") pod "7c5213f4-2ee5-4136-b62c-7b291044e467" (UID: "7c5213f4-2ee5-4136-b62c-7b291044e467"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845656 4695 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.845891 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7c5213f4-2ee5-4136-b62c-7b291044e467" (UID: "7c5213f4-2ee5-4136-b62c-7b291044e467"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.846012 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-client-ca" (OuterVolumeSpecName: "client-ca") pod "7f633ea2-f78f-4a36-8f28-13c2f053c349" (UID: "7f633ea2-f78f-4a36-8f28-13c2f053c349"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.846060 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-config" (OuterVolumeSpecName: "config") pod "7c5213f4-2ee5-4136-b62c-7b291044e467" (UID: "7c5213f4-2ee5-4136-b62c-7b291044e467"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.846620 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-config" (OuterVolumeSpecName: "config") pod "7f633ea2-f78f-4a36-8f28-13c2f053c349" (UID: "7f633ea2-f78f-4a36-8f28-13c2f053c349"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.850488 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f633ea2-f78f-4a36-8f28-13c2f053c349-kube-api-access-hlq8c" (OuterVolumeSpecName: "kube-api-access-hlq8c") pod "7f633ea2-f78f-4a36-8f28-13c2f053c349" (UID: "7f633ea2-f78f-4a36-8f28-13c2f053c349"). InnerVolumeSpecName "kube-api-access-hlq8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.850530 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f633ea2-f78f-4a36-8f28-13c2f053c349-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7f633ea2-f78f-4a36-8f28-13c2f053c349" (UID: "7f633ea2-f78f-4a36-8f28-13c2f053c349"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.850559 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c5213f4-2ee5-4136-b62c-7b291044e467-kube-api-access-p4gbz" (OuterVolumeSpecName: "kube-api-access-p4gbz") pod "7c5213f4-2ee5-4136-b62c-7b291044e467" (UID: "7c5213f4-2ee5-4136-b62c-7b291044e467"). InnerVolumeSpecName "kube-api-access-p4gbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.850592 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c5213f4-2ee5-4136-b62c-7b291044e467-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c5213f4-2ee5-4136-b62c-7b291044e467" (UID: "7c5213f4-2ee5-4136-b62c-7b291044e467"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947322 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7f633ea2-f78f-4a36-8f28-13c2f053c349-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947372 4695 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947384 4695 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947393 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlq8c\" (UniqueName: \"kubernetes.io/projected/7f633ea2-f78f-4a36-8f28-13c2f053c349-kube-api-access-hlq8c\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947402 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c5213f4-2ee5-4136-b62c-7b291044e467-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947410 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4gbz\" (UniqueName: \"kubernetes.io/projected/7c5213f4-2ee5-4136-b62c-7b291044e467-kube-api-access-p4gbz\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947419 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f633ea2-f78f-4a36-8f28-13c2f053c349-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:12 crc kubenswrapper[4695]: I1126 13:29:12.947430 4695 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c5213f4-2ee5-4136-b62c-7b291044e467-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.167277 4695 generic.go:334] "Generic (PLEG): container finished" podID="7c5213f4-2ee5-4136-b62c-7b291044e467" containerID="68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25" exitCode=0 Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.167428 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.168926 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" event={"ID":"7c5213f4-2ee5-4136-b62c-7b291044e467","Type":"ContainerDied","Data":"68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25"} Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.168982 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5mdsv" event={"ID":"7c5213f4-2ee5-4136-b62c-7b291044e467","Type":"ContainerDied","Data":"27dce5d2b1550679f7a1beb323687e9653c83fda0e6000d4f7d08c94b4e8fc39"} Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.169005 4695 scope.go:117] "RemoveContainer" containerID="68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.169041 4695 generic.go:334] "Generic (PLEG): container finished" podID="7f633ea2-f78f-4a36-8f28-13c2f053c349" containerID="e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06" exitCode=0 Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.169106 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.169102 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" event={"ID":"7f633ea2-f78f-4a36-8f28-13c2f053c349","Type":"ContainerDied","Data":"e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06"} Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.169156 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp" event={"ID":"7f633ea2-f78f-4a36-8f28-13c2f053c349","Type":"ContainerDied","Data":"d45de231aba42616981b31ab568d44cee447efdb0daf269f8a8f251032d6861e"} Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.196597 4695 scope.go:117] "RemoveContainer" containerID="68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25" Nov 26 13:29:13 crc kubenswrapper[4695]: E1126 13:29:13.197464 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25\": container with ID starting with 68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25 not found: ID does not exist" containerID="68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.197502 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25"} err="failed to get container status \"68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25\": rpc error: code = NotFound desc = could not find container \"68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25\": container with ID starting with 
68a7e1560e77d728b2fed474e9fa16553f737e0d5e173ef600531b2f594d9b25 not found: ID does not exist" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.197544 4695 scope.go:117] "RemoveContainer" containerID="e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.217787 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp"] Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.220808 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8q8mp"] Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.224234 4695 scope.go:117] "RemoveContainer" containerID="e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06" Nov 26 13:29:13 crc kubenswrapper[4695]: E1126 13:29:13.226723 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06\": container with ID starting with e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06 not found: ID does not exist" containerID="e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.226787 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06"} err="failed to get container status \"e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06\": rpc error: code = NotFound desc = could not find container \"e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06\": container with ID starting with e2e26f8cdab3619de29af05f4a3a59122b23b41c47faae40932408fa3b6fad06 not found: ID does not exist" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.228818 4695 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5mdsv"] Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.231932 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5mdsv"] Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655019 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx"] Nov 26 13:29:13 crc kubenswrapper[4695]: E1126 13:29:13.655246 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c5213f4-2ee5-4136-b62c-7b291044e467" containerName="controller-manager" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655261 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c5213f4-2ee5-4136-b62c-7b291044e467" containerName="controller-manager" Nov 26 13:29:13 crc kubenswrapper[4695]: E1126 13:29:13.655271 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f633ea2-f78f-4a36-8f28-13c2f053c349" containerName="route-controller-manager" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655278 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f633ea2-f78f-4a36-8f28-13c2f053c349" containerName="route-controller-manager" Nov 26 13:29:13 crc kubenswrapper[4695]: E1126 13:29:13.655299 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655306 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655417 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f633ea2-f78f-4a36-8f28-13c2f053c349" containerName="route-controller-manager" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655428 4695 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7c5213f4-2ee5-4136-b62c-7b291044e467" containerName="controller-manager" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655439 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.655865 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.658957 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678c97f644-v5gsw"] Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.659577 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.660422 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.660459 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.660435 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.660986 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.661034 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.662025 4695 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.664087 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.664567 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.666409 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.666446 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.666450 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.667939 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.673779 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.674930 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx"] Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.728061 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678c97f644-v5gsw"] Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.758034 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-proxy-ca-bundles\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.758392 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-client-ca\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.758516 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bw9\" (UniqueName: \"kubernetes.io/projected/16ba9f88-89df-46c7-b793-5935f8eeb62b-kube-api-access-q5bw9\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.758636 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-config\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.758760 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-client-ca\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " 
pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.758891 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-config\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.759011 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ba9f88-89df-46c7-b793-5935f8eeb62b-serving-cert\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.759131 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fff46\" (UniqueName: \"kubernetes.io/projected/b354a9af-d040-469d-a53d-062c0742855c-kube-api-access-fff46\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.759224 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b354a9af-d040-469d-a53d-062c0742855c-serving-cert\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869120 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ba9f88-89df-46c7-b793-5935f8eeb62b-serving-cert\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869181 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fff46\" (UniqueName: \"kubernetes.io/projected/b354a9af-d040-469d-a53d-062c0742855c-kube-api-access-fff46\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869212 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b354a9af-d040-469d-a53d-062c0742855c-serving-cert\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869236 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-proxy-ca-bundles\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869275 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-client-ca\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " 
pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869308 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5bw9\" (UniqueName: \"kubernetes.io/projected/16ba9f88-89df-46c7-b793-5935f8eeb62b-kube-api-access-q5bw9\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869334 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-config\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869376 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-client-ca\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.869416 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-config\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.871168 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-config\") pod 
\"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.871207 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-client-ca\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.871577 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-client-ca\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.872149 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-proxy-ca-bundles\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.875792 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16ba9f88-89df-46c7-b793-5935f8eeb62b-serving-cert\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.877750 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b354a9af-d040-469d-a53d-062c0742855c-serving-cert\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.880045 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16ba9f88-89df-46c7-b793-5935f8eeb62b-config\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.896125 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5bw9\" (UniqueName: \"kubernetes.io/projected/16ba9f88-89df-46c7-b793-5935f8eeb62b-kube-api-access-q5bw9\") pod \"controller-manager-678c97f644-v5gsw\" (UID: \"16ba9f88-89df-46c7-b793-5935f8eeb62b\") " pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:13 crc kubenswrapper[4695]: I1126 13:29:13.898773 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fff46\" (UniqueName: \"kubernetes.io/projected/b354a9af-d040-469d-a53d-062c0742855c-kube-api-access-fff46\") pod \"route-controller-manager-64f75cbf5f-6zkrx\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:14 crc kubenswrapper[4695]: I1126 13:29:14.029787 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:14 crc kubenswrapper[4695]: I1126 13:29:14.044617 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:14 crc kubenswrapper[4695]: I1126 13:29:14.283390 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx"] Nov 26 13:29:14 crc kubenswrapper[4695]: I1126 13:29:14.467550 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678c97f644-v5gsw"] Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.169753 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c5213f4-2ee5-4136-b62c-7b291044e467" path="/var/lib/kubelet/pods/7c5213f4-2ee5-4136-b62c-7b291044e467/volumes" Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.170459 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f633ea2-f78f-4a36-8f28-13c2f053c349" path="/var/lib/kubelet/pods/7f633ea2-f78f-4a36-8f28-13c2f053c349/volumes" Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.191225 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" event={"ID":"16ba9f88-89df-46c7-b793-5935f8eeb62b","Type":"ContainerStarted","Data":"eb3106b99b41683ff84a5cc1dd8f4384e93abfca999eb8d0c04897ad0902954a"} Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.191305 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" event={"ID":"16ba9f88-89df-46c7-b793-5935f8eeb62b","Type":"ContainerStarted","Data":"99dae702fe59200100df3ddcedabe3b50f63d38b0d3fae8efd96d18ec649e909"} Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.193238 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.194758 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" event={"ID":"b354a9af-d040-469d-a53d-062c0742855c","Type":"ContainerStarted","Data":"3f71deda70b3cc3eab6bb1e06c2e41e0291c09c2b287b9ddca068cfa10b68009"} Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.194784 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" event={"ID":"b354a9af-d040-469d-a53d-062c0742855c","Type":"ContainerStarted","Data":"ec23586139e73221f4c91446286f887d5148154041dc580541a432234f5a196b"} Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.195069 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.200613 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.230748 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-678c97f644-v5gsw" podStartSLOduration=3.230729783 podStartE2EDuration="3.230729783s" podCreationTimestamp="2025-11-26 13:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:29:15.214171683 +0000 UTC m=+338.849996765" watchObservedRunningTime="2025-11-26 13:29:15.230729783 +0000 UTC m=+338.866554865" Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.243211 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:15 crc kubenswrapper[4695]: I1126 13:29:15.267300 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" podStartSLOduration=3.267278723 podStartE2EDuration="3.267278723s" podCreationTimestamp="2025-11-26 13:29:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:29:15.250501096 +0000 UTC m=+338.886326178" watchObservedRunningTime="2025-11-26 13:29:15.267278723 +0000 UTC m=+338.903103805" Nov 26 13:29:29 crc kubenswrapper[4695]: I1126 13:29:29.886021 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx"] Nov 26 13:29:29 crc kubenswrapper[4695]: I1126 13:29:29.886862 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" podUID="b354a9af-d040-469d-a53d-062c0742855c" containerName="route-controller-manager" containerID="cri-o://3f71deda70b3cc3eab6bb1e06c2e41e0291c09c2b287b9ddca068cfa10b68009" gracePeriod=30 Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.294073 4695 generic.go:334] "Generic (PLEG): container finished" podID="b354a9af-d040-469d-a53d-062c0742855c" containerID="3f71deda70b3cc3eab6bb1e06c2e41e0291c09c2b287b9ddca068cfa10b68009" exitCode=0 Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.294129 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" event={"ID":"b354a9af-d040-469d-a53d-062c0742855c","Type":"ContainerDied","Data":"3f71deda70b3cc3eab6bb1e06c2e41e0291c09c2b287b9ddca068cfa10b68009"} Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.391933 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.518661 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fff46\" (UniqueName: \"kubernetes.io/projected/b354a9af-d040-469d-a53d-062c0742855c-kube-api-access-fff46\") pod \"b354a9af-d040-469d-a53d-062c0742855c\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.518795 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-config\") pod \"b354a9af-d040-469d-a53d-062c0742855c\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.518900 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b354a9af-d040-469d-a53d-062c0742855c-serving-cert\") pod \"b354a9af-d040-469d-a53d-062c0742855c\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.518936 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-client-ca\") pod \"b354a9af-d040-469d-a53d-062c0742855c\" (UID: \"b354a9af-d040-469d-a53d-062c0742855c\") " Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.520034 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-client-ca" (OuterVolumeSpecName: "client-ca") pod "b354a9af-d040-469d-a53d-062c0742855c" (UID: "b354a9af-d040-469d-a53d-062c0742855c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.520246 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-config" (OuterVolumeSpecName: "config") pod "b354a9af-d040-469d-a53d-062c0742855c" (UID: "b354a9af-d040-469d-a53d-062c0742855c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.525285 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b354a9af-d040-469d-a53d-062c0742855c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b354a9af-d040-469d-a53d-062c0742855c" (UID: "b354a9af-d040-469d-a53d-062c0742855c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.525006 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b354a9af-d040-469d-a53d-062c0742855c-kube-api-access-fff46" (OuterVolumeSpecName: "kube-api-access-fff46") pod "b354a9af-d040-469d-a53d-062c0742855c" (UID: "b354a9af-d040-469d-a53d-062c0742855c"). InnerVolumeSpecName "kube-api-access-fff46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.620476 4695 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b354a9af-d040-469d-a53d-062c0742855c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.620533 4695 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-client-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.620558 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fff46\" (UniqueName: \"kubernetes.io/projected/b354a9af-d040-469d-a53d-062c0742855c-kube-api-access-fff46\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:30 crc kubenswrapper[4695]: I1126 13:29:30.620578 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b354a9af-d040-469d-a53d-062c0742855c-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.302214 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" event={"ID":"b354a9af-d040-469d-a53d-062c0742855c","Type":"ContainerDied","Data":"ec23586139e73221f4c91446286f887d5148154041dc580541a432234f5a196b"} Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.302291 4695 scope.go:117] "RemoveContainer" containerID="3f71deda70b3cc3eab6bb1e06c2e41e0291c09c2b287b9ddca068cfa10b68009" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.302307 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.331705 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx"] Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.338832 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64f75cbf5f-6zkrx"] Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.671651 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk"] Nov 26 13:29:31 crc kubenswrapper[4695]: E1126 13:29:31.672015 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b354a9af-d040-469d-a53d-062c0742855c" containerName="route-controller-manager" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.672055 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b354a9af-d040-469d-a53d-062c0742855c" containerName="route-controller-manager" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.672326 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b354a9af-d040-469d-a53d-062c0742855c" containerName="route-controller-manager" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.673045 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.676542 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.676990 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.677760 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.677879 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.678161 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.679201 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.692177 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk"] Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.835307 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbe8102-9260-4687-a05b-07c53382c1cc-client-ca\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.835988 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbe8102-9260-4687-a05b-07c53382c1cc-config\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.836238 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebbe8102-9260-4687-a05b-07c53382c1cc-serving-cert\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.836501 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nxjx\" (UniqueName: \"kubernetes.io/projected/ebbe8102-9260-4687-a05b-07c53382c1cc-kube-api-access-7nxjx\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.937873 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbe8102-9260-4687-a05b-07c53382c1cc-client-ca\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.937989 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbe8102-9260-4687-a05b-07c53382c1cc-config\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" 
(UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.938127 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebbe8102-9260-4687-a05b-07c53382c1cc-serving-cert\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.938222 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nxjx\" (UniqueName: \"kubernetes.io/projected/ebbe8102-9260-4687-a05b-07c53382c1cc-kube-api-access-7nxjx\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.940519 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebbe8102-9260-4687-a05b-07c53382c1cc-config\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.940552 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ebbe8102-9260-4687-a05b-07c53382c1cc-client-ca\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.946557 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebbe8102-9260-4687-a05b-07c53382c1cc-serving-cert\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:31 crc kubenswrapper[4695]: I1126 13:29:31.960135 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nxjx\" (UniqueName: \"kubernetes.io/projected/ebbe8102-9260-4687-a05b-07c53382c1cc-kube-api-access-7nxjx\") pod \"route-controller-manager-7c4f8d87f6-xwpdk\" (UID: \"ebbe8102-9260-4687-a05b-07c53382c1cc\") " pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:32 crc kubenswrapper[4695]: I1126 13:29:32.000821 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:32 crc kubenswrapper[4695]: I1126 13:29:32.488640 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk"] Nov 26 13:29:32 crc kubenswrapper[4695]: W1126 13:29:32.495997 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebbe8102_9260_4687_a05b_07c53382c1cc.slice/crio-c6cc371c2dfa55fab88e8c8946f86b8d6733dfbcc9d6fa1f7462accdec89576e WatchSource:0}: Error finding container c6cc371c2dfa55fab88e8c8946f86b8d6733dfbcc9d6fa1f7462accdec89576e: Status 404 returned error can't find the container with id c6cc371c2dfa55fab88e8c8946f86b8d6733dfbcc9d6fa1f7462accdec89576e Nov 26 13:29:33 crc kubenswrapper[4695]: I1126 13:29:33.173330 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b354a9af-d040-469d-a53d-062c0742855c" path="/var/lib/kubelet/pods/b354a9af-d040-469d-a53d-062c0742855c/volumes" Nov 26 13:29:33 crc kubenswrapper[4695]: I1126 
13:29:33.315787 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" event={"ID":"ebbe8102-9260-4687-a05b-07c53382c1cc","Type":"ContainerStarted","Data":"70ede718f8700cdaffd51da0b4729b2adeeece8fb50e9c4de15a5ff1af6b43a3"} Nov 26 13:29:33 crc kubenswrapper[4695]: I1126 13:29:33.315848 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" event={"ID":"ebbe8102-9260-4687-a05b-07c53382c1cc","Type":"ContainerStarted","Data":"c6cc371c2dfa55fab88e8c8946f86b8d6733dfbcc9d6fa1f7462accdec89576e"} Nov 26 13:29:33 crc kubenswrapper[4695]: I1126 13:29:33.316199 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:33 crc kubenswrapper[4695]: I1126 13:29:33.322790 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" Nov 26 13:29:33 crc kubenswrapper[4695]: I1126 13:29:33.334062 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c4f8d87f6-xwpdk" podStartSLOduration=4.334044559 podStartE2EDuration="4.334044559s" podCreationTimestamp="2025-11-26 13:29:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:29:33.332948585 +0000 UTC m=+356.968773667" watchObservedRunningTime="2025-11-26 13:29:33.334044559 +0000 UTC m=+356.969869641" Nov 26 13:29:36 crc kubenswrapper[4695]: I1126 13:29:36.397384 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:29:36 crc kubenswrapper[4695]: I1126 13:29:36.398009 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.873080 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9tvmf"] Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.873989 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9tvmf" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="registry-server" containerID="cri-o://143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40" gracePeriod=30 Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.882465 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84jk7"] Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.882832 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84jk7" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="registry-server" containerID="cri-o://35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9" gracePeriod=30 Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.891806 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzxm6"] Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.892096 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" podUID="fc485407-1013-4868-83de-ec51c4cdb030" 
containerName="marketplace-operator" containerID="cri-o://5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd" gracePeriod=30 Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.930897 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbmxm"] Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.931493 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sbmxm" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="registry-server" containerID="cri-o://b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6" gracePeriod=30 Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.937159 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9p6q"] Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.937642 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k9p6q" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="registry-server" containerID="cri-o://2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3" gracePeriod=30 Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.945040 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xxs2"] Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.946764 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.951759 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xxs2"] Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.968684 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.968991 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dx7\" (UniqueName: \"kubernetes.io/projected/27975591-c7e9-4e85-96a9-2a1280f6e1f3-kube-api-access-68dx7\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:54 crc kubenswrapper[4695]: I1126 13:29:54.969131 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.073079 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: 
\"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.073499 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68dx7\" (UniqueName: \"kubernetes.io/projected/27975591-c7e9-4e85-96a9-2a1280f6e1f3-kube-api-access-68dx7\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.073543 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.075502 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.081178 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.096918 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68dx7\" 
(UniqueName: \"kubernetes.io/projected/27975591-c7e9-4e85-96a9-2a1280f6e1f3-kube-api-access-68dx7\") pod \"marketplace-operator-79b997595-9xxs2\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.335521 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.339844 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.370771 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.376748 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.409415 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.466993 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.474415 4695 generic.go:334] "Generic (PLEG): container finished" podID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerID="b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6" exitCode=0 Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.474489 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbmxm" event={"ID":"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb","Type":"ContainerDied","Data":"b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.474513 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sbmxm" event={"ID":"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb","Type":"ContainerDied","Data":"aaf44565da6a03d35ca2558e627a1144ed1d8a83504f237289ebd8298d73bcca"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.474530 4695 scope.go:117] "RemoveContainer" containerID="b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.474636 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sbmxm" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477579 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr95c\" (UniqueName: \"kubernetes.io/projected/1f6308de-c770-4097-807a-ea8d1fd17151-kube-api-access-vr95c\") pod \"1f6308de-c770-4097-807a-ea8d1fd17151\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477670 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df292\" (UniqueName: \"kubernetes.io/projected/4ede3477-0b5e-43ba-a074-244304777695-kube-api-access-df292\") pod \"4ede3477-0b5e-43ba-a074-244304777695\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477714 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-catalog-content\") pod \"1f6308de-c770-4097-807a-ea8d1fd17151\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477734 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-utilities\") pod \"838cffa7-c983-4531-9b48-8397076df516\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477752 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-utilities\") pod \"4ede3477-0b5e-43ba-a074-244304777695\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477778 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-catalog-content\") pod \"838cffa7-c983-4531-9b48-8397076df516\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477830 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-catalog-content\") pod \"4ede3477-0b5e-43ba-a074-244304777695\" (UID: \"4ede3477-0b5e-43ba-a074-244304777695\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477848 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-utilities\") pod \"1f6308de-c770-4097-807a-ea8d1fd17151\" (UID: \"1f6308de-c770-4097-807a-ea8d1fd17151\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.477868 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gxdm\" (UniqueName: \"kubernetes.io/projected/838cffa7-c983-4531-9b48-8397076df516-kube-api-access-5gxdm\") pod \"838cffa7-c983-4531-9b48-8397076df516\" (UID: \"838cffa7-c983-4531-9b48-8397076df516\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.481491 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-utilities" (OuterVolumeSpecName: "utilities") pod "4ede3477-0b5e-43ba-a074-244304777695" (UID: "4ede3477-0b5e-43ba-a074-244304777695"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.482293 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-utilities" (OuterVolumeSpecName: "utilities") pod "838cffa7-c983-4531-9b48-8397076df516" (UID: "838cffa7-c983-4531-9b48-8397076df516"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.482573 4695 generic.go:334] "Generic (PLEG): container finished" podID="838cffa7-c983-4531-9b48-8397076df516" containerID="143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40" exitCode=0 Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.485692 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tvmf" event={"ID":"838cffa7-c983-4531-9b48-8397076df516","Type":"ContainerDied","Data":"143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.485779 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9tvmf" event={"ID":"838cffa7-c983-4531-9b48-8397076df516","Type":"ContainerDied","Data":"8888cda98eac5af0755bb3c925f008ca9296e58c8938d57aefbd3586e53942ab"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.485898 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9tvmf" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.489596 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ede3477-0b5e-43ba-a074-244304777695-kube-api-access-df292" (OuterVolumeSpecName: "kube-api-access-df292") pod "4ede3477-0b5e-43ba-a074-244304777695" (UID: "4ede3477-0b5e-43ba-a074-244304777695"). InnerVolumeSpecName "kube-api-access-df292". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.489652 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-utilities" (OuterVolumeSpecName: "utilities") pod "1f6308de-c770-4097-807a-ea8d1fd17151" (UID: "1f6308de-c770-4097-807a-ea8d1fd17151"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.491534 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6308de-c770-4097-807a-ea8d1fd17151-kube-api-access-vr95c" (OuterVolumeSpecName: "kube-api-access-vr95c") pod "1f6308de-c770-4097-807a-ea8d1fd17151" (UID: "1f6308de-c770-4097-807a-ea8d1fd17151"). InnerVolumeSpecName "kube-api-access-vr95c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.503151 4695 generic.go:334] "Generic (PLEG): container finished" podID="fc485407-1013-4868-83de-ec51c4cdb030" containerID="5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd" exitCode=0 Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.503222 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" event={"ID":"fc485407-1013-4868-83de-ec51c4cdb030","Type":"ContainerDied","Data":"5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.503250 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" event={"ID":"fc485407-1013-4868-83de-ec51c4cdb030","Type":"ContainerDied","Data":"775b4bcaea9b28a1ecc028ea46571611bec91d9d1f03901bbde478d2e0b357ec"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.503305 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-dzxm6" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.505107 4695 scope.go:117] "RemoveContainer" containerID="a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.509856 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838cffa7-c983-4531-9b48-8397076df516-kube-api-access-5gxdm" (OuterVolumeSpecName: "kube-api-access-5gxdm") pod "838cffa7-c983-4531-9b48-8397076df516" (UID: "838cffa7-c983-4531-9b48-8397076df516"). InnerVolumeSpecName "kube-api-access-5gxdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.511296 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9p6q" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.511280 4695 generic.go:334] "Generic (PLEG): container finished" podID="4ede3477-0b5e-43ba-a074-244304777695" containerID="2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3" exitCode=0 Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.512104 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9p6q" event={"ID":"4ede3477-0b5e-43ba-a074-244304777695","Type":"ContainerDied","Data":"2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.512627 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9p6q" event={"ID":"4ede3477-0b5e-43ba-a074-244304777695","Type":"ContainerDied","Data":"c94d82a937456a1b0f51ac6e8b110bf68211778a118b3eb116150262d59c27da"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.521084 4695 generic.go:334] "Generic (PLEG): container finished" podID="1f6308de-c770-4097-807a-ea8d1fd17151" 
containerID="35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9" exitCode=0 Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.521148 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84jk7" event={"ID":"1f6308de-c770-4097-807a-ea8d1fd17151","Type":"ContainerDied","Data":"35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.521179 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84jk7" event={"ID":"1f6308de-c770-4097-807a-ea8d1fd17151","Type":"ContainerDied","Data":"230d3c9e861599fce88ef1bfb5ecd348773ced85eaa83bf3b129d97993c160ab"} Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.521291 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84jk7" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.525070 4695 scope.go:117] "RemoveContainer" containerID="de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.542685 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f6308de-c770-4097-807a-ea8d1fd17151" (UID: "1f6308de-c770-4097-807a-ea8d1fd17151"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.547866 4695 scope.go:117] "RemoveContainer" containerID="b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.549545 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6\": container with ID starting with b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6 not found: ID does not exist" containerID="b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.549592 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6"} err="failed to get container status \"b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6\": rpc error: code = NotFound desc = could not find container \"b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6\": container with ID starting with b931b6e82c21aa3177d9dc3a7df3f07397fea3aa010b36bc2e9b815eaad4b6e6 not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.549623 4695 scope.go:117] "RemoveContainer" containerID="a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.550142 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f\": container with ID starting with a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f not found: ID does not exist" containerID="a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.550177 
4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f"} err="failed to get container status \"a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f\": rpc error: code = NotFound desc = could not find container \"a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f\": container with ID starting with a2d5d0766df04ddeea85dbd56aa933db62868df4d3b85b4d3eacefe70be8853f not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.550198 4695 scope.go:117] "RemoveContainer" containerID="de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.550586 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d\": container with ID starting with de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d not found: ID does not exist" containerID="de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.550637 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d"} err="failed to get container status \"de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d\": rpc error: code = NotFound desc = could not find container \"de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d\": container with ID starting with de8b40d133546ac553928f5c093e727d3a8953b93e57a7fe4c6aac22ff9ecc7d not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.550669 4695 scope.go:117] "RemoveContainer" containerID="143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 
13:29:55.557427 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "838cffa7-c983-4531-9b48-8397076df516" (UID: "838cffa7-c983-4531-9b48-8397076df516"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.568965 4695 scope.go:117] "RemoveContainer" containerID="7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.578878 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-operator-metrics\") pod \"fc485407-1013-4868-83de-ec51c4cdb030\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.578973 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-trusted-ca\") pod \"fc485407-1013-4868-83de-ec51c4cdb030\" (UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579057 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-catalog-content\") pod \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579098 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w8jj\" (UniqueName: \"kubernetes.io/projected/fc485407-1013-4868-83de-ec51c4cdb030-kube-api-access-4w8jj\") pod \"fc485407-1013-4868-83de-ec51c4cdb030\" 
(UID: \"fc485407-1013-4868-83de-ec51c4cdb030\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579165 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-utilities\") pod \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579196 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6j9v\" (UniqueName: \"kubernetes.io/projected/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-kube-api-access-r6j9v\") pod \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\" (UID: \"7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb\") " Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579472 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df292\" (UniqueName: \"kubernetes.io/projected/4ede3477-0b5e-43ba-a074-244304777695-kube-api-access-df292\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579488 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579500 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579512 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579524 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/838cffa7-c983-4531-9b48-8397076df516-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579535 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f6308de-c770-4097-807a-ea8d1fd17151-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579547 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gxdm\" (UniqueName: \"kubernetes.io/projected/838cffa7-c983-4531-9b48-8397076df516-kube-api-access-5gxdm\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.579558 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr95c\" (UniqueName: \"kubernetes.io/projected/1f6308de-c770-4097-807a-ea8d1fd17151-kube-api-access-vr95c\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.580910 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-utilities" (OuterVolumeSpecName: "utilities") pod "7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" (UID: "7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.581012 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fc485407-1013-4868-83de-ec51c4cdb030" (UID: "fc485407-1013-4868-83de-ec51c4cdb030"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.583205 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fc485407-1013-4868-83de-ec51c4cdb030" (UID: "fc485407-1013-4868-83de-ec51c4cdb030"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.583585 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-kube-api-access-r6j9v" (OuterVolumeSpecName: "kube-api-access-r6j9v") pod "7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" (UID: "7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb"). InnerVolumeSpecName "kube-api-access-r6j9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.584833 4695 scope.go:117] "RemoveContainer" containerID="440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.585339 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc485407-1013-4868-83de-ec51c4cdb030-kube-api-access-4w8jj" (OuterVolumeSpecName: "kube-api-access-4w8jj") pod "fc485407-1013-4868-83de-ec51c4cdb030" (UID: "fc485407-1013-4868-83de-ec51c4cdb030"). InnerVolumeSpecName "kube-api-access-4w8jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.601626 4695 scope.go:117] "RemoveContainer" containerID="143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.602125 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40\": container with ID starting with 143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40 not found: ID does not exist" containerID="143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.602164 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40"} err="failed to get container status \"143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40\": rpc error: code = NotFound desc = could not find container \"143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40\": container with ID starting with 143d023b6c0b973b62819930558c55fe25122becc65bba39708b704f28771c40 not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.602200 4695 scope.go:117] "RemoveContainer" containerID="7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.602466 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df\": container with ID starting with 7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df not found: ID does not exist" containerID="7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.602492 
4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df"} err="failed to get container status \"7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df\": rpc error: code = NotFound desc = could not find container \"7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df\": container with ID starting with 7af995eb3d687179f6d8f1daee040ec9a8913ccf78caf58419b09c7f77f0f1df not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.602509 4695 scope.go:117] "RemoveContainer" containerID="440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.602740 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736\": container with ID starting with 440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736 not found: ID does not exist" containerID="440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.602768 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736"} err="failed to get container status \"440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736\": rpc error: code = NotFound desc = could not find container \"440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736\": container with ID starting with 440ebe2051df7bce24ed16401ec2238f1d28172ffe4894aa111bf3fc772b8736 not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.602786 4695 scope.go:117] "RemoveContainer" containerID="5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 
13:29:55.602822 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" (UID: "7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.615037 4695 scope.go:117] "RemoveContainer" containerID="5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.615425 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd\": container with ID starting with 5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd not found: ID does not exist" containerID="5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.615455 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd"} err="failed to get container status \"5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd\": rpc error: code = NotFound desc = could not find container \"5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd\": container with ID starting with 5a6ddd42a35be3245cf0199a21744c1e69478d98a6e1daa4dbb2b71b263b8bdd not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.615476 4695 scope.go:117] "RemoveContainer" containerID="2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.615990 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ede3477-0b5e-43ba-a074-244304777695" (UID: "4ede3477-0b5e-43ba-a074-244304777695"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.628242 4695 scope.go:117] "RemoveContainer" containerID="7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.646633 4695 scope.go:117] "RemoveContainer" containerID="071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.659004 4695 scope.go:117] "RemoveContainer" containerID="2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.659393 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3\": container with ID starting with 2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3 not found: ID does not exist" containerID="2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.659434 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3"} err="failed to get container status \"2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3\": rpc error: code = NotFound desc = could not find container \"2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3\": container with ID starting with 2360ea96e39bdb7f4ffba9d808425d822d560e282077dfb59c7120fe13b39fa3 not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.659468 4695 scope.go:117] "RemoveContainer" 
containerID="7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.659867 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf\": container with ID starting with 7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf not found: ID does not exist" containerID="7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.659896 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf"} err="failed to get container status \"7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf\": rpc error: code = NotFound desc = could not find container \"7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf\": container with ID starting with 7e91ae70908ba2fc855e58101897ae866ba78dcc4e5d8ef93ffafe40d72efecf not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.659919 4695 scope.go:117] "RemoveContainer" containerID="071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.660172 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80\": container with ID starting with 071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80 not found: ID does not exist" containerID="071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.660285 4695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80"} err="failed to get container status \"071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80\": rpc error: code = NotFound desc = could not find container \"071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80\": container with ID starting with 071bfea2c64a73085976be5abe49c25130611f2db7f1644b3fd700f6f9d9df80 not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.660393 4695 scope.go:117] "RemoveContainer" containerID="35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.671464 4695 scope.go:117] "RemoveContainer" containerID="f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.681211 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.681451 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6j9v\" (UniqueName: \"kubernetes.io/projected/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-kube-api-access-r6j9v\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.681529 4695 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.681625 4695 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fc485407-1013-4868-83de-ec51c4cdb030-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 
13:29:55.681757 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.681829 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w8jj\" (UniqueName: \"kubernetes.io/projected/fc485407-1013-4868-83de-ec51c4cdb030-kube-api-access-4w8jj\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.681911 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ede3477-0b5e-43ba-a074-244304777695-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.686042 4695 scope.go:117] "RemoveContainer" containerID="a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.701045 4695 scope.go:117] "RemoveContainer" containerID="35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.701469 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9\": container with ID starting with 35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9 not found: ID does not exist" containerID="35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.701509 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9"} err="failed to get container status \"35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9\": rpc error: code = NotFound desc = could not find container 
\"35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9\": container with ID starting with 35afaa7c938a42ae8f3f1ded39546a7cb83fdbb308093dcc781d68814994fad9 not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.701537 4695 scope.go:117] "RemoveContainer" containerID="f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.701818 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9\": container with ID starting with f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9 not found: ID does not exist" containerID="f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.701849 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9"} err="failed to get container status \"f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9\": rpc error: code = NotFound desc = could not find container \"f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9\": container with ID starting with f5cf526ad1ba0c68ad3fa4bcb9e34e71bfca65657dd390d8c01ea75233c6b0a9 not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.701867 4695 scope.go:117] "RemoveContainer" containerID="a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a" Nov 26 13:29:55 crc kubenswrapper[4695]: E1126 13:29:55.702111 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a\": container with ID starting with a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a not found: ID does not exist" 
containerID="a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.702138 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a"} err="failed to get container status \"a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a\": rpc error: code = NotFound desc = could not find container \"a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a\": container with ID starting with a24f45032ff2f45635041c2d63d0f3e619710c77969c36bebee0b9a9aa8b1c3a not found: ID does not exist" Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.796966 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xxs2"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.806025 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbmxm"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.809594 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sbmxm"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.831060 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9tvmf"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.836312 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9tvmf"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.840167 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzxm6"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.843803 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-dzxm6"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.849257 4695 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9p6q"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.851128 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9p6q"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.859312 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84jk7"] Nov 26 13:29:55 crc kubenswrapper[4695]: I1126 13:29:55.864270 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84jk7"] Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.285767 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-twktp"] Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.285962 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="extract-content" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.285974 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="extract-content" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.285983 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.285991 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286004 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286010 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="registry-server" Nov 26 13:29:56 crc 
kubenswrapper[4695]: E1126 13:29:56.286019 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286024 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286032 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="extract-content" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286038 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="extract-content" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286044 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="extract-content" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286050 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="extract-content" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286058 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286063 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286072 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="extract-content" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286078 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="extract-content" Nov 26 13:29:56 crc 
kubenswrapper[4695]: E1126 13:29:56.286085 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286090 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286099 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286106 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286115 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc485407-1013-4868-83de-ec51c4cdb030" containerName="marketplace-operator" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286120 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc485407-1013-4868-83de-ec51c4cdb030" containerName="marketplace-operator" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286128 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286134 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="extract-utilities" Nov 26 13:29:56 crc kubenswrapper[4695]: E1126 13:29:56.286144 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286150 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="registry-server" Nov 26 13:29:56 crc 
kubenswrapper[4695]: I1126 13:29:56.286226 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286238 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286248 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ede3477-0b5e-43ba-a074-244304777695" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286256 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc485407-1013-4868-83de-ec51c4cdb030" containerName="marketplace-operator" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286265 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="838cffa7-c983-4531-9b48-8397076df516" containerName="registry-server" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.286930 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.295741 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.303711 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twktp"] Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.390592 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-utilities\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.390791 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frt9s\" (UniqueName: \"kubernetes.io/projected/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-kube-api-access-frt9s\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.390885 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-catalog-content\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.491879 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frt9s\" (UniqueName: \"kubernetes.io/projected/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-kube-api-access-frt9s\") pod \"redhat-marketplace-twktp\" (UID: 
\"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.491947 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-catalog-content\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.492001 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-utilities\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.492575 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-utilities\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.492635 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-catalog-content\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.514314 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frt9s\" (UniqueName: \"kubernetes.io/projected/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-kube-api-access-frt9s\") pod \"redhat-marketplace-twktp\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " 
pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.530593 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" event={"ID":"27975591-c7e9-4e85-96a9-2a1280f6e1f3","Type":"ContainerStarted","Data":"3c5b9eaeab40cc2c73dba5c0cfce9d61d02cb323b06ea6586dd3467615a6e207"} Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.530931 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" event={"ID":"27975591-c7e9-4e85-96a9-2a1280f6e1f3","Type":"ContainerStarted","Data":"b19d8db895d9e57bbf96625aef17eae9a28c066d864fd8c5a1f9f5181f59c784"} Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.530958 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.536852 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.564397 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" podStartSLOduration=2.564382233 podStartE2EDuration="2.564382233s" podCreationTimestamp="2025-11-26 13:29:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:29:56.562703029 +0000 UTC m=+380.198528111" watchObservedRunningTime="2025-11-26 13:29:56.564382233 +0000 UTC m=+380.200207315" Nov 26 13:29:56 crc kubenswrapper[4695]: I1126 13:29:56.633789 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.051281 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-twktp"] Nov 26 13:29:57 crc kubenswrapper[4695]: W1126 13:29:57.062513 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71e69014_c2bb_4ab6_88ce_a318d11a0b1c.slice/crio-0f57f107c680042f3b8a2a2b2c770da1a24674c45bf15084043a91bcc8b98893 WatchSource:0}: Error finding container 0f57f107c680042f3b8a2a2b2c770da1a24674c45bf15084043a91bcc8b98893: Status 404 returned error can't find the container with id 0f57f107c680042f3b8a2a2b2c770da1a24674c45bf15084043a91bcc8b98893 Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.185987 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f6308de-c770-4097-807a-ea8d1fd17151" path="/var/lib/kubelet/pods/1f6308de-c770-4097-807a-ea8d1fd17151/volumes" Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.186925 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ede3477-0b5e-43ba-a074-244304777695" path="/var/lib/kubelet/pods/4ede3477-0b5e-43ba-a074-244304777695/volumes" Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.187518 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb" path="/var/lib/kubelet/pods/7b90bd52-9ae9-4e2c-b32f-1b4a2df4ecdb/volumes" Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.188469 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838cffa7-c983-4531-9b48-8397076df516" path="/var/lib/kubelet/pods/838cffa7-c983-4531-9b48-8397076df516/volumes" Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.190182 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc485407-1013-4868-83de-ec51c4cdb030" 
path="/var/lib/kubelet/pods/fc485407-1013-4868-83de-ec51c4cdb030/volumes" Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.546055 4695 generic.go:334] "Generic (PLEG): container finished" podID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerID="979b6b2ccb3ccf30c0e011f642ac8a26a2f267fed17a003e53736dead9b8456e" exitCode=0 Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.547092 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twktp" event={"ID":"71e69014-c2bb-4ab6-88ce-a318d11a0b1c","Type":"ContainerDied","Data":"979b6b2ccb3ccf30c0e011f642ac8a26a2f267fed17a003e53736dead9b8456e"} Nov 26 13:29:57 crc kubenswrapper[4695]: I1126 13:29:57.547115 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twktp" event={"ID":"71e69014-c2bb-4ab6-88ce-a318d11a0b1c","Type":"ContainerStarted","Data":"0f57f107c680042f3b8a2a2b2c770da1a24674c45bf15084043a91bcc8b98893"} Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.089200 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dnwwr"] Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.092755 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.098547 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.102616 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnwwr"] Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.219394 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rmw7\" (UniqueName: \"kubernetes.io/projected/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-kube-api-access-2rmw7\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.219459 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-catalog-content\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.219492 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-utilities\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.320606 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rmw7\" (UniqueName: \"kubernetes.io/projected/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-kube-api-access-2rmw7\") pod \"redhat-operators-dnwwr\" (UID: 
\"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.320707 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-catalog-content\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.320752 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-utilities\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.321463 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-utilities\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.321960 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-catalog-content\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.345891 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rmw7\" (UniqueName: \"kubernetes.io/projected/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-kube-api-access-2rmw7\") pod \"redhat-operators-dnwwr\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " 
pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.425855 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.554148 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twktp" event={"ID":"71e69014-c2bb-4ab6-88ce-a318d11a0b1c","Type":"ContainerStarted","Data":"cf5ee6598d6630edd4ba3c8b1baa7e8b77f8b7bb9c6e9b54cef2c7a2f5c0e6ce"} Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.685409 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jvcb5"] Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.687176 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.690775 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.695782 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvcb5"] Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.829381 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bwgl\" (UniqueName: \"kubernetes.io/projected/812ad6aa-bb31-433e-886d-2518da5bb809-kube-api-access-7bwgl\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.829826 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-catalog-content\") 
pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.829869 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-utilities\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.907600 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dnwwr"] Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.931551 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bwgl\" (UniqueName: \"kubernetes.io/projected/812ad6aa-bb31-433e-886d-2518da5bb809-kube-api-access-7bwgl\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.931621 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-catalog-content\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.931656 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-utilities\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.932177 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-utilities\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.935630 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-catalog-content\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:58 crc kubenswrapper[4695]: I1126 13:29:58.956647 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bwgl\" (UniqueName: \"kubernetes.io/projected/812ad6aa-bb31-433e-886d-2518da5bb809-kube-api-access-7bwgl\") pod \"certified-operators-jvcb5\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.009923 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.211755 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jvcb5"] Nov 26 13:29:59 crc kubenswrapper[4695]: W1126 13:29:59.218257 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod812ad6aa_bb31_433e_886d_2518da5bb809.slice/crio-5f1ea4584917010e8b019bc2b82f9e984c02c878b9e637af694010f3539f1dd3 WatchSource:0}: Error finding container 5f1ea4584917010e8b019bc2b82f9e984c02c878b9e637af694010f3539f1dd3: Status 404 returned error can't find the container with id 5f1ea4584917010e8b019bc2b82f9e984c02c878b9e637af694010f3539f1dd3 Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.560822 4695 generic.go:334] "Generic (PLEG): container finished" podID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerID="fe27a0b1d1b1ec798595f639af9ab24a223845e00efe4d045e6aa4a78620c687" exitCode=0 Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.560918 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwwr" event={"ID":"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f","Type":"ContainerDied","Data":"fe27a0b1d1b1ec798595f639af9ab24a223845e00efe4d045e6aa4a78620c687"} Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.560949 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwwr" event={"ID":"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f","Type":"ContainerStarted","Data":"6e91e1256ff0e5a7b95fa277571cb3ab87db088a33f3f6675e5dd60a99f94ab6"} Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.563909 4695 generic.go:334] "Generic (PLEG): container finished" podID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerID="cf5ee6598d6630edd4ba3c8b1baa7e8b77f8b7bb9c6e9b54cef2c7a2f5c0e6ce" exitCode=0 Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 
13:29:59.563959 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twktp" event={"ID":"71e69014-c2bb-4ab6-88ce-a318d11a0b1c","Type":"ContainerDied","Data":"cf5ee6598d6630edd4ba3c8b1baa7e8b77f8b7bb9c6e9b54cef2c7a2f5c0e6ce"} Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.566606 4695 generic.go:334] "Generic (PLEG): container finished" podID="812ad6aa-bb31-433e-886d-2518da5bb809" containerID="904846e5ff99f2d79afb278dbc4b18d5ccdd789c5b46704daac11cc3aa944548" exitCode=0 Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.566638 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvcb5" event={"ID":"812ad6aa-bb31-433e-886d-2518da5bb809","Type":"ContainerDied","Data":"904846e5ff99f2d79afb278dbc4b18d5ccdd789c5b46704daac11cc3aa944548"} Nov 26 13:29:59 crc kubenswrapper[4695]: I1126 13:29:59.566657 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvcb5" event={"ID":"812ad6aa-bb31-433e-886d-2518da5bb809","Type":"ContainerStarted","Data":"5f1ea4584917010e8b019bc2b82f9e984c02c878b9e637af694010f3539f1dd3"} Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.167300 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx"] Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.168067 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.170299 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.171166 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.182551 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx"] Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.354987 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/739d1e68-f65b-49d5-8ad6-4feff696f45a-config-volume\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.355095 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwbq7\" (UniqueName: \"kubernetes.io/projected/739d1e68-f65b-49d5-8ad6-4feff696f45a-kube-api-access-mwbq7\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.355124 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/739d1e68-f65b-49d5-8ad6-4feff696f45a-secret-volume\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.456036 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwbq7\" (UniqueName: \"kubernetes.io/projected/739d1e68-f65b-49d5-8ad6-4feff696f45a-kube-api-access-mwbq7\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.456083 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/739d1e68-f65b-49d5-8ad6-4feff696f45a-secret-volume\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.456117 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/739d1e68-f65b-49d5-8ad6-4feff696f45a-config-volume\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.457074 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/739d1e68-f65b-49d5-8ad6-4feff696f45a-config-volume\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.465000 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/739d1e68-f65b-49d5-8ad6-4feff696f45a-secret-volume\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.483507 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwbq7\" (UniqueName: \"kubernetes.io/projected/739d1e68-f65b-49d5-8ad6-4feff696f45a-kube-api-access-mwbq7\") pod \"collect-profiles-29402730-jx7mx\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.490667 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lxxzl"] Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.492055 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.492550 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.496486 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.503854 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxxzl"] Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.580338 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvcb5" event={"ID":"812ad6aa-bb31-433e-886d-2518da5bb809","Type":"ContainerStarted","Data":"f041d1d23ff5846c8a107a05a352c20410c21fb3d90bcb1378e4f37bcb45fa15"} Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.596169 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twktp" event={"ID":"71e69014-c2bb-4ab6-88ce-a318d11a0b1c","Type":"ContainerStarted","Data":"d2d0e0316c71e568f52d26c5c248b8d4c29373d6ac367b65932294746b637088"} Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.621447 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-twktp" podStartSLOduration=2.196865888 podStartE2EDuration="4.621420313s" podCreationTimestamp="2025-11-26 13:29:56 +0000 UTC" firstStartedPulling="2025-11-26 13:29:57.54848977 +0000 UTC m=+381.184314852" lastFinishedPulling="2025-11-26 13:29:59.973044195 +0000 UTC m=+383.608869277" observedRunningTime="2025-11-26 13:30:00.619842453 +0000 UTC m=+384.255667535" watchObservedRunningTime="2025-11-26 13:30:00.621420313 +0000 UTC m=+384.257245395" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.659531 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-catalog-content\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.659927 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7cjh\" (UniqueName: \"kubernetes.io/projected/495bad95-1540-4a1a-b6bb-bfabf3683c2a-kube-api-access-r7cjh\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.659949 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-utilities\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.720459 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx"] Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.761132 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-catalog-content\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.761257 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7cjh\" (UniqueName: \"kubernetes.io/projected/495bad95-1540-4a1a-b6bb-bfabf3683c2a-kube-api-access-r7cjh\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " 
pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.761278 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-utilities\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.761633 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-catalog-content\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.761731 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-utilities\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.785635 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7cjh\" (UniqueName: \"kubernetes.io/projected/495bad95-1540-4a1a-b6bb-bfabf3683c2a-kube-api-access-r7cjh\") pod \"community-operators-lxxzl\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:00 crc kubenswrapper[4695]: I1126 13:30:00.852699 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.082789 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lxxzl"] Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.603710 4695 generic.go:334] "Generic (PLEG): container finished" podID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerID="76bf4f4852ecd3c7a3aea66783b5d04519f3192718fe5673f0753e5eebd1023d" exitCode=0 Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.603822 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxxzl" event={"ID":"495bad95-1540-4a1a-b6bb-bfabf3683c2a","Type":"ContainerDied","Data":"76bf4f4852ecd3c7a3aea66783b5d04519f3192718fe5673f0753e5eebd1023d"} Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.604795 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxxzl" event={"ID":"495bad95-1540-4a1a-b6bb-bfabf3683c2a","Type":"ContainerStarted","Data":"fe382a978d12e8e31c0cf28ea3dc9d093f5d1162f0dc21ae3dfdffd756aac624"} Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.608688 4695 generic.go:334] "Generic (PLEG): container finished" podID="812ad6aa-bb31-433e-886d-2518da5bb809" containerID="f041d1d23ff5846c8a107a05a352c20410c21fb3d90bcb1378e4f37bcb45fa15" exitCode=0 Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.608751 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvcb5" event={"ID":"812ad6aa-bb31-433e-886d-2518da5bb809","Type":"ContainerDied","Data":"f041d1d23ff5846c8a107a05a352c20410c21fb3d90bcb1378e4f37bcb45fa15"} Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.612631 4695 generic.go:334] "Generic (PLEG): container finished" podID="739d1e68-f65b-49d5-8ad6-4feff696f45a" containerID="95aebe13a05e482934847e56eeac7a4c52acb95bb55428c18d105609644d8d5a" exitCode=0 Nov 26 
13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.612767 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" event={"ID":"739d1e68-f65b-49d5-8ad6-4feff696f45a","Type":"ContainerDied","Data":"95aebe13a05e482934847e56eeac7a4c52acb95bb55428c18d105609644d8d5a"} Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.612793 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" event={"ID":"739d1e68-f65b-49d5-8ad6-4feff696f45a","Type":"ContainerStarted","Data":"aef7435af7dd681d518d388e1ca73ae9755a725e85a4ccc39b67c6e9357b8c2f"} Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.618279 4695 generic.go:334] "Generic (PLEG): container finished" podID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerID="d5382a362c3df366657086fa8f3965339f71abbd390e0aa772243441b932639b" exitCode=0 Nov 26 13:30:01 crc kubenswrapper[4695]: I1126 13:30:01.618417 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwwr" event={"ID":"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f","Type":"ContainerDied","Data":"d5382a362c3df366657086fa8f3965339f71abbd390e0aa772243441b932639b"} Nov 26 13:30:02 crc kubenswrapper[4695]: I1126 13:30:02.646913 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvcb5" event={"ID":"812ad6aa-bb31-433e-886d-2518da5bb809","Type":"ContainerStarted","Data":"9dbcbd0e8241da877762ff6724e326b05c69e34f8bcb920b9504a24fa62e1b31"} Nov 26 13:30:02 crc kubenswrapper[4695]: I1126 13:30:02.676315 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jvcb5" podStartSLOduration=2.075158044 podStartE2EDuration="4.676296942s" podCreationTimestamp="2025-11-26 13:29:58 +0000 UTC" firstStartedPulling="2025-11-26 13:29:59.568790722 +0000 UTC m=+383.204615804" lastFinishedPulling="2025-11-26 
13:30:02.16992962 +0000 UTC m=+385.805754702" observedRunningTime="2025-11-26 13:30:02.670992352 +0000 UTC m=+386.306817434" watchObservedRunningTime="2025-11-26 13:30:02.676296942 +0000 UTC m=+386.312122024" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.003054 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.191216 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwbq7\" (UniqueName: \"kubernetes.io/projected/739d1e68-f65b-49d5-8ad6-4feff696f45a-kube-api-access-mwbq7\") pod \"739d1e68-f65b-49d5-8ad6-4feff696f45a\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.191285 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/739d1e68-f65b-49d5-8ad6-4feff696f45a-config-volume\") pod \"739d1e68-f65b-49d5-8ad6-4feff696f45a\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.191309 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/739d1e68-f65b-49d5-8ad6-4feff696f45a-secret-volume\") pod \"739d1e68-f65b-49d5-8ad6-4feff696f45a\" (UID: \"739d1e68-f65b-49d5-8ad6-4feff696f45a\") " Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.192089 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/739d1e68-f65b-49d5-8ad6-4feff696f45a-config-volume" (OuterVolumeSpecName: "config-volume") pod "739d1e68-f65b-49d5-8ad6-4feff696f45a" (UID: "739d1e68-f65b-49d5-8ad6-4feff696f45a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.198822 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/739d1e68-f65b-49d5-8ad6-4feff696f45a-kube-api-access-mwbq7" (OuterVolumeSpecName: "kube-api-access-mwbq7") pod "739d1e68-f65b-49d5-8ad6-4feff696f45a" (UID: "739d1e68-f65b-49d5-8ad6-4feff696f45a"). InnerVolumeSpecName "kube-api-access-mwbq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.204022 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/739d1e68-f65b-49d5-8ad6-4feff696f45a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "739d1e68-f65b-49d5-8ad6-4feff696f45a" (UID: "739d1e68-f65b-49d5-8ad6-4feff696f45a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.293059 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwbq7\" (UniqueName: \"kubernetes.io/projected/739d1e68-f65b-49d5-8ad6-4feff696f45a-kube-api-access-mwbq7\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.293104 4695 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/739d1e68-f65b-49d5-8ad6-4feff696f45a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.293117 4695 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/739d1e68-f65b-49d5-8ad6-4feff696f45a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.657075 4695 generic.go:334] "Generic (PLEG): container finished" podID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" 
containerID="f1d69accabe6493cbdfde9c1ef460dc3296b15d14874f3cd289240528954879f" exitCode=0 Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.658994 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" Nov 26 13:30:03 crc kubenswrapper[4695]: I1126 13:30:03.675540 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dnwwr" podStartSLOduration=2.627307202 podStartE2EDuration="5.675523503s" podCreationTimestamp="2025-11-26 13:29:58 +0000 UTC" firstStartedPulling="2025-11-26 13:29:59.562246653 +0000 UTC m=+383.198071745" lastFinishedPulling="2025-11-26 13:30:02.610462964 +0000 UTC m=+386.246288046" observedRunningTime="2025-11-26 13:30:03.674891753 +0000 UTC m=+387.310716835" watchObservedRunningTime="2025-11-26 13:30:03.675523503 +0000 UTC m=+387.311348575" Nov 26 13:30:04 crc kubenswrapper[4695]: E1126 13:30:04.582182 4695 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.42s" Nov 26 13:30:04 crc kubenswrapper[4695]: I1126 13:30:04.582234 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwwr" event={"ID":"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f","Type":"ContainerStarted","Data":"170822a7a1f0f9261ef6d123ee4ccfa211e69e221eba47b10006b5fbe68af343"} Nov 26 13:30:04 crc kubenswrapper[4695]: I1126 13:30:04.582323 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxxzl" event={"ID":"495bad95-1540-4a1a-b6bb-bfabf3683c2a","Type":"ContainerDied","Data":"f1d69accabe6493cbdfde9c1ef460dc3296b15d14874f3cd289240528954879f"} Nov 26 13:30:04 crc kubenswrapper[4695]: I1126 13:30:04.582371 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx" 
event={"ID":"739d1e68-f65b-49d5-8ad6-4feff696f45a","Type":"ContainerDied","Data":"aef7435af7dd681d518d388e1ca73ae9755a725e85a4ccc39b67c6e9357b8c2f"} Nov 26 13:30:04 crc kubenswrapper[4695]: I1126 13:30:04.582544 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aef7435af7dd681d518d388e1ca73ae9755a725e85a4ccc39b67c6e9357b8c2f" Nov 26 13:30:05 crc kubenswrapper[4695]: I1126 13:30:05.677144 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxxzl" event={"ID":"495bad95-1540-4a1a-b6bb-bfabf3683c2a","Type":"ContainerStarted","Data":"4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5"} Nov 26 13:30:05 crc kubenswrapper[4695]: I1126 13:30:05.704925 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lxxzl" podStartSLOduration=2.693970268 podStartE2EDuration="5.704906316s" podCreationTimestamp="2025-11-26 13:30:00 +0000 UTC" firstStartedPulling="2025-11-26 13:30:01.605396826 +0000 UTC m=+385.241221918" lastFinishedPulling="2025-11-26 13:30:04.616332884 +0000 UTC m=+388.252157966" observedRunningTime="2025-11-26 13:30:05.702397876 +0000 UTC m=+389.338222968" watchObservedRunningTime="2025-11-26 13:30:05.704906316 +0000 UTC m=+389.340731398" Nov 26 13:30:06 crc kubenswrapper[4695]: I1126 13:30:06.396687 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:30:06 crc kubenswrapper[4695]: I1126 13:30:06.396774 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:30:06 crc kubenswrapper[4695]: I1126 13:30:06.634203 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:30:06 crc kubenswrapper[4695]: I1126 13:30:06.636196 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:30:06 crc kubenswrapper[4695]: I1126 13:30:06.686601 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:30:06 crc kubenswrapper[4695]: I1126 13:30:06.738782 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:30:08 crc kubenswrapper[4695]: I1126 13:30:08.426613 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:30:08 crc kubenswrapper[4695]: I1126 13:30:08.426990 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:30:09 crc kubenswrapper[4695]: I1126 13:30:09.010795 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:30:09 crc kubenswrapper[4695]: I1126 13:30:09.011584 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:30:09 crc kubenswrapper[4695]: I1126 13:30:09.065154 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:30:09 crc kubenswrapper[4695]: I1126 13:30:09.485478 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dnwwr" 
podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="registry-server" probeResult="failure" output=< Nov 26 13:30:09 crc kubenswrapper[4695]: timeout: failed to connect service ":50051" within 1s Nov 26 13:30:09 crc kubenswrapper[4695]: > Nov 26 13:30:09 crc kubenswrapper[4695]: I1126 13:30:09.743575 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:30:10 crc kubenswrapper[4695]: I1126 13:30:10.853504 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:10 crc kubenswrapper[4695]: I1126 13:30:10.853561 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:10 crc kubenswrapper[4695]: I1126 13:30:10.903746 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:11 crc kubenswrapper[4695]: I1126 13:30:11.767041 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:30:18 crc kubenswrapper[4695]: I1126 13:30:18.482647 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:30:18 crc kubenswrapper[4695]: I1126 13:30:18.540370 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.396867 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.397545 4695 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.397608 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.398539 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"622882533bb496b4f2a02e17a6efe426030229a91b2dd1f8012fea475a41c1c7"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.398696 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://622882533bb496b4f2a02e17a6efe426030229a91b2dd1f8012fea475a41c1c7" gracePeriod=600 Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.886972 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="622882533bb496b4f2a02e17a6efe426030229a91b2dd1f8012fea475a41c1c7" exitCode=0 Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.887060 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"622882533bb496b4f2a02e17a6efe426030229a91b2dd1f8012fea475a41c1c7"} Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 
13:30:36.887891 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"ded9ef810e781283318e5bdddf519279febd066b4df707f4e1fff9dc7f532238"} Nov 26 13:30:36 crc kubenswrapper[4695]: I1126 13:30:36.887961 4695 scope.go:117] "RemoveContainer" containerID="c27f2562f79eb7d96a989ad4f828c4627436a2fb9f56723093b4927c45f73d24" Nov 26 13:32:36 crc kubenswrapper[4695]: I1126 13:32:36.397327 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:32:36 crc kubenswrapper[4695]: I1126 13:32:36.398951 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:33:06 crc kubenswrapper[4695]: I1126 13:33:06.397167 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:33:06 crc kubenswrapper[4695]: I1126 13:33:06.397894 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:33:36 crc 
kubenswrapper[4695]: I1126 13:33:36.397476 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.398165 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.398229 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.398855 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ded9ef810e781283318e5bdddf519279febd066b4df707f4e1fff9dc7f532238"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.398928 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://ded9ef810e781283318e5bdddf519279febd066b4df707f4e1fff9dc7f532238" gracePeriod=600 Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.995628 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" 
containerID="ded9ef810e781283318e5bdddf519279febd066b4df707f4e1fff9dc7f532238" exitCode=0 Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.995728 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"ded9ef810e781283318e5bdddf519279febd066b4df707f4e1fff9dc7f532238"} Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.996368 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"84b7005908bcbacbd029ad98335f5091a83ffcb04f71e35adcdf1c55fac6fce2"} Nov 26 13:33:36 crc kubenswrapper[4695]: I1126 13:33:36.996396 4695 scope.go:117] "RemoveContainer" containerID="622882533bb496b4f2a02e17a6efe426030229a91b2dd1f8012fea475a41c1c7" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.059691 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bms5v"] Nov 26 13:34:27 crc kubenswrapper[4695]: E1126 13:34:27.060319 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="739d1e68-f65b-49d5-8ad6-4feff696f45a" containerName="collect-profiles" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.060331 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="739d1e68-f65b-49d5-8ad6-4feff696f45a" containerName="collect-profiles" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.060430 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="739d1e68-f65b-49d5-8ad6-4feff696f45a" containerName="collect-profiles" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.060759 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.077526 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bms5v"] Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.192665 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762c75b3-6a64-457d-87dc-58d78f00cb97-registry-certificates\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.192705 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762c75b3-6a64-457d-87dc-58d78f00cb97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.192734 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwd9n\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-kube-api-access-lwd9n\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.192766 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-registry-tls\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.192952 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762c75b3-6a64-457d-87dc-58d78f00cb97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.193017 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.193078 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-bound-sa-token\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.193110 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762c75b3-6a64-457d-87dc-58d78f00cb97-trusted-ca\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.215093 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.296767 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762c75b3-6a64-457d-87dc-58d78f00cb97-registry-certificates\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.296819 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762c75b3-6a64-457d-87dc-58d78f00cb97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.296846 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwd9n\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-kube-api-access-lwd9n\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.296875 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-registry-tls\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.296914 
4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762c75b3-6a64-457d-87dc-58d78f00cb97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.296946 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-bound-sa-token\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.296964 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762c75b3-6a64-457d-87dc-58d78f00cb97-trusted-ca\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.297912 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/762c75b3-6a64-457d-87dc-58d78f00cb97-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.298263 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/762c75b3-6a64-457d-87dc-58d78f00cb97-registry-certificates\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 
13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.298311 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/762c75b3-6a64-457d-87dc-58d78f00cb97-trusted-ca\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.302757 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/762c75b3-6a64-457d-87dc-58d78f00cb97-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.302777 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-registry-tls\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.318260 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwd9n\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-kube-api-access-lwd9n\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.320706 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/762c75b3-6a64-457d-87dc-58d78f00cb97-bound-sa-token\") pod \"image-registry-66df7c8f76-bms5v\" (UID: \"762c75b3-6a64-457d-87dc-58d78f00cb97\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.376196 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:27 crc kubenswrapper[4695]: I1126 13:34:27.574334 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bms5v"] Nov 26 13:34:28 crc kubenswrapper[4695]: I1126 13:34:28.304802 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" event={"ID":"762c75b3-6a64-457d-87dc-58d78f00cb97","Type":"ContainerStarted","Data":"769e1f0225dbab20e7310d02a9f529238f0fe740a6835c71408169e38c463b4b"} Nov 26 13:34:28 crc kubenswrapper[4695]: I1126 13:34:28.305169 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:28 crc kubenswrapper[4695]: I1126 13:34:28.305190 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" event={"ID":"762c75b3-6a64-457d-87dc-58d78f00cb97","Type":"ContainerStarted","Data":"9c1d50d327e5e058d38c0981813264447ad873a7be99dd665c8667db3902e362"} Nov 26 13:34:28 crc kubenswrapper[4695]: I1126 13:34:28.328225 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" podStartSLOduration=1.328203633 podStartE2EDuration="1.328203633s" podCreationTimestamp="2025-11-26 13:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:34:28.326243521 +0000 UTC m=+651.962068623" watchObservedRunningTime="2025-11-26 13:34:28.328203633 +0000 UTC m=+651.964028725" Nov 26 13:34:47 crc kubenswrapper[4695]: I1126 13:34:47.380808 4695 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-bms5v" Nov 26 13:34:47 crc kubenswrapper[4695]: I1126 13:34:47.443186 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dl9mv"] Nov 26 13:35:12 crc kubenswrapper[4695]: I1126 13:35:12.488193 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" podUID="bbdc37eb-6673-45f1-8d42-ac1e51e041a3" containerName="registry" containerID="cri-o://21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87" gracePeriod=30 Nov 26 13:35:12 crc kubenswrapper[4695]: I1126 13:35:12.840460 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.004334 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.004730 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-bound-sa-token\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.004795 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-ca-trust-extracted\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 
13:35:13.004858 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-certificates\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.004902 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-trusted-ca\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.004936 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-installation-pull-secrets\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.004989 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-tls\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.005040 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nchxc\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-kube-api-access-nchxc\") pod \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\" (UID: \"bbdc37eb-6673-45f1-8d42-ac1e51e041a3\") " Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.006102 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-trusted-ca" (OuterVolumeSpecName: 
"trusted-ca") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.006189 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.013024 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.017652 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-kube-api-access-nchxc" (OuterVolumeSpecName: "kube-api-access-nchxc") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "kube-api-access-nchxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.018043 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.018527 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.030632 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.032229 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bbdc37eb-6673-45f1-8d42-ac1e51e041a3" (UID: "bbdc37eb-6673-45f1-8d42-ac1e51e041a3"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.108002 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.108089 4695 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.108112 4695 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.108133 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nchxc\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-kube-api-access-nchxc\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.108153 4695 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.108183 4695 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.108203 4695 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bbdc37eb-6673-45f1-8d42-ac1e51e041a3-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 26 13:35:13 crc 
kubenswrapper[4695]: I1126 13:35:13.563324 4695 generic.go:334] "Generic (PLEG): container finished" podID="bbdc37eb-6673-45f1-8d42-ac1e51e041a3" containerID="21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87" exitCode=0 Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.563464 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" event={"ID":"bbdc37eb-6673-45f1-8d42-ac1e51e041a3","Type":"ContainerDied","Data":"21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87"} Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.563478 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.563513 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-dl9mv" event={"ID":"bbdc37eb-6673-45f1-8d42-ac1e51e041a3","Type":"ContainerDied","Data":"9f69d419568bb4a673898b8dc211ef3f82ee7b332bb47dbbb5233c67d5f652df"} Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.563708 4695 scope.go:117] "RemoveContainer" containerID="21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.598047 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dl9mv"] Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.606328 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-dl9mv"] Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.616481 4695 scope.go:117] "RemoveContainer" containerID="21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87" Nov 26 13:35:13 crc kubenswrapper[4695]: E1126 13:35:13.617050 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87\": container with ID starting with 21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87 not found: ID does not exist" containerID="21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87" Nov 26 13:35:13 crc kubenswrapper[4695]: I1126 13:35:13.617097 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87"} err="failed to get container status \"21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87\": rpc error: code = NotFound desc = could not find container \"21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87\": container with ID starting with 21a35f0584b120a714309162e202512b18163def10b90e998f5af7c8950f0a87 not found: ID does not exist" Nov 26 13:35:15 crc kubenswrapper[4695]: I1126 13:35:15.170170 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbdc37eb-6673-45f1-8d42-ac1e51e041a3" path="/var/lib/kubelet/pods/bbdc37eb-6673-45f1-8d42-ac1e51e041a3/volumes" Nov 26 13:35:36 crc kubenswrapper[4695]: I1126 13:35:36.396306 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:35:36 crc kubenswrapper[4695]: I1126 13:35:36.396936 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.615792 4695 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-cainjector-7f985d654d-g67gn"] Nov 26 13:35:51 crc kubenswrapper[4695]: E1126 13:35:51.616600 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbdc37eb-6673-45f1-8d42-ac1e51e041a3" containerName="registry" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.616614 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbdc37eb-6673-45f1-8d42-ac1e51e041a3" containerName="registry" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.616702 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbdc37eb-6673-45f1-8d42-ac1e51e041a3" containerName="registry" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.617132 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.624738 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.625082 4695 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-7mcmj" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.625196 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.630320 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g67gn"] Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.642468 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dsj6h"] Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.643598 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dsj6h" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.645850 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-nczqg"] Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.646663 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.650023 4695 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-crl29" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.650199 4695 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wd8d2" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.666403 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-nczqg"] Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.670610 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dsj6h"] Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.716340 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpxv\" (UniqueName: \"kubernetes.io/projected/f548c3df-1cb9-4e28-af35-4471c3633b76-kube-api-access-fvpxv\") pod \"cert-manager-cainjector-7f985d654d-g67gn\" (UID: \"f548c3df-1cb9-4e28-af35-4471c3633b76\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.817446 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9x5n\" (UniqueName: \"kubernetes.io/projected/cff7ef71-26ea-4334-b5b0-9d6c931d6fff-kube-api-access-l9x5n\") pod \"cert-manager-5b446d88c5-dsj6h\" (UID: \"cff7ef71-26ea-4334-b5b0-9d6c931d6fff\") " 
pod="cert-manager/cert-manager-5b446d88c5-dsj6h" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.817543 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhdhz\" (UniqueName: \"kubernetes.io/projected/221fa061-b961-4fb4-b8bb-e280873ce253-kube-api-access-hhdhz\") pod \"cert-manager-webhook-5655c58dd6-nczqg\" (UID: \"221fa061-b961-4fb4-b8bb-e280873ce253\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.817590 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvpxv\" (UniqueName: \"kubernetes.io/projected/f548c3df-1cb9-4e28-af35-4471c3633b76-kube-api-access-fvpxv\") pod \"cert-manager-cainjector-7f985d654d-g67gn\" (UID: \"f548c3df-1cb9-4e28-af35-4471c3633b76\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.842018 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvpxv\" (UniqueName: \"kubernetes.io/projected/f548c3df-1cb9-4e28-af35-4471c3633b76-kube-api-access-fvpxv\") pod \"cert-manager-cainjector-7f985d654d-g67gn\" (UID: \"f548c3df-1cb9-4e28-af35-4471c3633b76\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.918877 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9x5n\" (UniqueName: \"kubernetes.io/projected/cff7ef71-26ea-4334-b5b0-9d6c931d6fff-kube-api-access-l9x5n\") pod \"cert-manager-5b446d88c5-dsj6h\" (UID: \"cff7ef71-26ea-4334-b5b0-9d6c931d6fff\") " pod="cert-manager/cert-manager-5b446d88c5-dsj6h" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.919005 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhdhz\" (UniqueName: 
\"kubernetes.io/projected/221fa061-b961-4fb4-b8bb-e280873ce253-kube-api-access-hhdhz\") pod \"cert-manager-webhook-5655c58dd6-nczqg\" (UID: \"221fa061-b961-4fb4-b8bb-e280873ce253\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.935859 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.948755 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9x5n\" (UniqueName: \"kubernetes.io/projected/cff7ef71-26ea-4334-b5b0-9d6c931d6fff-kube-api-access-l9x5n\") pod \"cert-manager-5b446d88c5-dsj6h\" (UID: \"cff7ef71-26ea-4334-b5b0-9d6c931d6fff\") " pod="cert-manager/cert-manager-5b446d88c5-dsj6h" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.949055 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhdhz\" (UniqueName: \"kubernetes.io/projected/221fa061-b961-4fb4-b8bb-e280873ce253-kube-api-access-hhdhz\") pod \"cert-manager-webhook-5655c58dd6-nczqg\" (UID: \"221fa061-b961-4fb4-b8bb-e280873ce253\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.960671 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-dsj6h" Nov 26 13:35:51 crc kubenswrapper[4695]: I1126 13:35:51.967678 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" Nov 26 13:35:52 crc kubenswrapper[4695]: I1126 13:35:52.231566 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g67gn"] Nov 26 13:35:52 crc kubenswrapper[4695]: I1126 13:35:52.246736 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:35:52 crc kubenswrapper[4695]: I1126 13:35:52.273811 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-nczqg"] Nov 26 13:35:52 crc kubenswrapper[4695]: W1126 13:35:52.286648 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod221fa061_b961_4fb4_b8bb_e280873ce253.slice/crio-380edff03ce4a0e3adc6d08f7b7ea8c8c92df8d1fd30474c8341653a93406805 WatchSource:0}: Error finding container 380edff03ce4a0e3adc6d08f7b7ea8c8c92df8d1fd30474c8341653a93406805: Status 404 returned error can't find the container with id 380edff03ce4a0e3adc6d08f7b7ea8c8c92df8d1fd30474c8341653a93406805 Nov 26 13:35:52 crc kubenswrapper[4695]: I1126 13:35:52.322653 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-dsj6h"] Nov 26 13:35:52 crc kubenswrapper[4695]: W1126 13:35:52.329676 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcff7ef71_26ea_4334_b5b0_9d6c931d6fff.slice/crio-ebd66da2d93dbcb8ce7137018620c1e3fa53a4a5ef08bea01d1b6f640b74961c WatchSource:0}: Error finding container ebd66da2d93dbcb8ce7137018620c1e3fa53a4a5ef08bea01d1b6f640b74961c: Status 404 returned error can't find the container with id ebd66da2d93dbcb8ce7137018620c1e3fa53a4a5ef08bea01d1b6f640b74961c Nov 26 13:35:52 crc kubenswrapper[4695]: I1126 13:35:52.809376 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-5b446d88c5-dsj6h" event={"ID":"cff7ef71-26ea-4334-b5b0-9d6c931d6fff","Type":"ContainerStarted","Data":"ebd66da2d93dbcb8ce7137018620c1e3fa53a4a5ef08bea01d1b6f640b74961c"} Nov 26 13:35:52 crc kubenswrapper[4695]: I1126 13:35:52.810766 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" event={"ID":"f548c3df-1cb9-4e28-af35-4471c3633b76","Type":"ContainerStarted","Data":"9a71c6b5f09df92514036f2528fce483b66edd5b08aa44e7bb3c1ed47ac49a6f"} Nov 26 13:35:52 crc kubenswrapper[4695]: I1126 13:35:52.812374 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" event={"ID":"221fa061-b961-4fb4-b8bb-e280873ce253","Type":"ContainerStarted","Data":"380edff03ce4a0e3adc6d08f7b7ea8c8c92df8d1fd30474c8341653a93406805"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.661148 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qc7jt"] Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.662104 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-controller" containerID="cri-o://fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d" gracePeriod=30 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.662183 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="nbdb" containerID="cri-o://75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16" gracePeriod=30 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.662246 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" 
containerName="northd" containerID="cri-o://e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3" gracePeriod=30 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.662296 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969" gracePeriod=30 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.662334 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-node" containerID="cri-o://b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d" gracePeriod=30 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.662392 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-acl-logging" containerID="cri-o://bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b" gracePeriod=30 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.662564 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="sbdb" containerID="cri-o://cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46" gracePeriod=30 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.706131 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" containerID="cri-o://0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651" gracePeriod=30 Nov 26 13:36:01 crc 
kubenswrapper[4695]: I1126 13:36:01.864942 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" event={"ID":"221fa061-b961-4fb4-b8bb-e280873ce253","Type":"ContainerStarted","Data":"27388b93447a08781b91d2319b00556aa30ab4647925f74302913cfb281588e0"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.865283 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.866474 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-dsj6h" event={"ID":"cff7ef71-26ea-4334-b5b0-9d6c931d6fff","Type":"ContainerStarted","Data":"62f57700ca30e4ae2f7efa37a2518dfa036a84296b86b28908a49f5eef7b4ac7"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.868490 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovnkube-controller/3.log" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.870196 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovn-acl-logging/0.log" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.870971 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovn-controller/0.log" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.871410 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651" exitCode=0 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872448 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" 
containerID="5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969" exitCode=0 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.871484 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872500 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872546 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872566 4695 scope.go:117] "RemoveContainer" containerID="fc99728f1cdb6135266aded70f104e59a617c2d60a7158bf60f7ed5739472ab5" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872467 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d" exitCode=0 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872655 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b" exitCode=143 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872672 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d" exitCode=143 Nov 26 
13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872705 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.872719 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.876240 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/2.log" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.876841 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/1.log" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.876898 4695 generic.go:334] "Generic (PLEG): container finished" podID="133aab88-6958-4575-aefd-c4675266edd5" containerID="d9b8fb0a2c9c23dba8b2b9dea6fb19868eeaee0f8b68596c22b1d94167eefaab" exitCode=2 Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.876995 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerDied","Data":"d9b8fb0a2c9c23dba8b2b9dea6fb19868eeaee0f8b68596c22b1d94167eefaab"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.877707 4695 scope.go:117] "RemoveContainer" containerID="d9b8fb0a2c9c23dba8b2b9dea6fb19868eeaee0f8b68596c22b1d94167eefaab" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.881449 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" 
event={"ID":"f548c3df-1cb9-4e28-af35-4471c3633b76","Type":"ContainerStarted","Data":"3969b975826b8a5984c2bf2b3b2957b1cea696459cb22817a0d78fcf8cd99bd4"} Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.884520 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" podStartSLOduration=2.18871617 podStartE2EDuration="10.884499542s" podCreationTimestamp="2025-11-26 13:35:51 +0000 UTC" firstStartedPulling="2025-11-26 13:35:52.288276208 +0000 UTC m=+735.924101290" lastFinishedPulling="2025-11-26 13:36:00.98405958 +0000 UTC m=+744.619884662" observedRunningTime="2025-11-26 13:36:01.881160936 +0000 UTC m=+745.516986018" watchObservedRunningTime="2025-11-26 13:36:01.884499542 +0000 UTC m=+745.520324624" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.900227 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-g67gn" podStartSLOduration=2.064869079 podStartE2EDuration="10.900203601s" podCreationTimestamp="2025-11-26 13:35:51 +0000 UTC" firstStartedPulling="2025-11-26 13:35:52.246297065 +0000 UTC m=+735.882122147" lastFinishedPulling="2025-11-26 13:36:01.081631587 +0000 UTC m=+744.717456669" observedRunningTime="2025-11-26 13:36:01.896758501 +0000 UTC m=+745.532583603" watchObservedRunningTime="2025-11-26 13:36:01.900203601 +0000 UTC m=+745.536028683" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.913232 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-dsj6h" podStartSLOduration=2.260871768 podStartE2EDuration="10.913209533s" podCreationTimestamp="2025-11-26 13:35:51 +0000 UTC" firstStartedPulling="2025-11-26 13:35:52.331755637 +0000 UTC m=+735.967580719" lastFinishedPulling="2025-11-26 13:36:00.984093402 +0000 UTC m=+744.619918484" observedRunningTime="2025-11-26 13:36:01.910749795 +0000 UTC m=+745.546574877" 
watchObservedRunningTime="2025-11-26 13:36:01.913209533 +0000 UTC m=+745.549034615" Nov 26 13:36:01 crc kubenswrapper[4695]: I1126 13:36:01.960300 4695 scope.go:117] "RemoveContainer" containerID="d28d4d1bc4915a353a21e800360614d08363697aa850f7fdec0f1aa9ee324c27" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.021027 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovn-acl-logging/0.log" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.021733 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovn-controller/0.log" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.022403 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.065663 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovn-node-metrics-cert\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074521 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-netd\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074566 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-systemd-units\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 
13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074588 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-node-log\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074605 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-slash\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074652 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw8dj\" (UniqueName: \"kubernetes.io/projected/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-kube-api-access-sw8dj\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074684 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-etc-openvswitch\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074710 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-ovn-kubernetes\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074734 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-env-overrides\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074766 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-var-lib-openvswitch\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074791 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-script-lib\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074812 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-openvswitch\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074839 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-systemd\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074862 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-log-socket\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 
13:36:02.074913 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-netns\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074938 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-bin\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074964 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.074988 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-ovn\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.075014 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-config\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.075041 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-kubelet\") pod \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\" (UID: \"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3\") " Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.075934 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.075973 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.075997 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-node-log" (OuterVolumeSpecName: "node-log") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.076018 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-slash" (OuterVolumeSpecName: "host-slash") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.076294 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.076363 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.076386 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.076737 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.076764 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.077058 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.077090 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.077506 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-log-socket" (OuterVolumeSpecName: "log-socket") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.077573 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.077627 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.077652 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.078029 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.078065 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.081738 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f9kwn"] Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.081999 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082033 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082044 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-node" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082051 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-node" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082066 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082075 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082085 4695 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="nbdb" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082093 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="nbdb" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082108 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082115 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082124 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kubecfg-setup" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082131 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kubecfg-setup" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082142 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082149 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082157 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="sbdb" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082164 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="sbdb" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082174 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" 
containerName="northd" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082181 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="northd" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082191 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082200 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082212 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082219 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.082229 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-acl-logging" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082237 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-acl-logging" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082328 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-kube-api-access-sw8dj" (OuterVolumeSpecName: "kube-api-access-sw8dj") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "kube-api-access-sw8dj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082376 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="nbdb" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082472 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="sbdb" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082489 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-ovn-metrics" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082528 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082543 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="northd" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082553 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082564 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovn-acl-logging" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082571 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082581 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082639 4695 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082652 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.082664 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="kube-rbac-proxy-node" Nov 26 13:36:02 crc kubenswrapper[4695]: E1126 13:36:02.083559 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.083579 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerName="ovnkube-controller" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.085035 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.085720 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.094554 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" (UID: "5fa56d8f-ad6a-4761-ad93-58a109b0a9a3"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.176906 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-slash\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.176964 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-ovn\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.176987 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovnkube-config\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177024 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-log-socket\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177050 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-cni-bin\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177167 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-kubelet\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177228 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-env-overrides\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177250 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-run-netns\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177264 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-cni-netd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177445 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqjnd\" (UniqueName: \"kubernetes.io/projected/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-kube-api-access-vqjnd\") pod \"ovnkube-node-f9kwn\" (UID: 
\"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177508 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177545 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177567 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovn-node-metrics-cert\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177597 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-etc-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177614 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177722 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-systemd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177749 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-node-log\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177776 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovnkube-script-lib\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177817 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-var-lib-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177840 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-systemd-units\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177904 4695 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177914 4695 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177926 4695 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177936 4695 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177945 4695 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-node-log\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177953 4695 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-slash\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177963 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw8dj\" (UniqueName: 
\"kubernetes.io/projected/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-kube-api-access-sw8dj\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177973 4695 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177983 4695 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.177991 4695 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178000 4695 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178009 4695 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178018 4695 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178027 4695 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-systemd\") on 
node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178035 4695 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-log-socket\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178043 4695 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178052 4695 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178061 4695 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178069 4695 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.178077 4695 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.279490 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-log-socket\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.279624 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-log-socket\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.279822 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-cni-bin\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.279899 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-kubelet\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.279943 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-run-netns\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.279963 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-cni-netd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 
13:36:02.279992 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-env-overrides\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280055 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqjnd\" (UniqueName: \"kubernetes.io/projected/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-kube-api-access-vqjnd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280070 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-kubelet\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280119 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280089 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc 
kubenswrapper[4695]: I1126 13:36:02.280128 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-run-netns\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280176 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-cni-netd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280197 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovn-node-metrics-cert\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280238 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280266 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-etc-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280288 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280318 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280364 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280385 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-etc-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280705 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-systemd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280759 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-node-log\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280795 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-systemd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280810 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovnkube-script-lib\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280844 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-node-log\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280877 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-var-lib-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.280969 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-systemd-units\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281013 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-slash\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281056 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-var-lib-openvswitch\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281063 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-ovn\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281082 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-systemd-units\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281112 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovnkube-config\") pod \"ovnkube-node-f9kwn\" 
(UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281136 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-run-ovn\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281102 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-slash\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281706 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-env-overrides\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.281965 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovnkube-config\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.282114 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-host-cni-bin\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 
13:36:02.282449 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovnkube-script-lib\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.285669 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-ovn-node-metrics-cert\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.299946 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqjnd\" (UniqueName: \"kubernetes.io/projected/98ef5da6-3da8-4add-9ca3-5e02deb0f1a3-kube-api-access-vqjnd\") pod \"ovnkube-node-f9kwn\" (UID: \"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.428637 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:02 crc kubenswrapper[4695]: W1126 13:36:02.444741 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ef5da6_3da8_4add_9ca3_5e02deb0f1a3.slice/crio-31f4304a6d201b07bf3986c11cae2c3b7af44e79b30cca8bf17d7b0f30136c35 WatchSource:0}: Error finding container 31f4304a6d201b07bf3986c11cae2c3b7af44e79b30cca8bf17d7b0f30136c35: Status 404 returned error can't find the container with id 31f4304a6d201b07bf3986c11cae2c3b7af44e79b30cca8bf17d7b0f30136c35 Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.892660 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovn-acl-logging/0.log" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.893851 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-qc7jt_5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/ovn-controller/0.log" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894330 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46" exitCode=0 Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894386 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16" exitCode=0 Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894399 4695 generic.go:334] "Generic (PLEG): container finished" podID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" containerID="e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3" exitCode=0 Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894403 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46"} Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894458 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16"} Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894493 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3"} Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894513 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" event={"ID":"5fa56d8f-ad6a-4761-ad93-58a109b0a9a3","Type":"ContainerDied","Data":"1659ff47a28f53355478b23d66dc31f0de1628d2a685a954220599019663989c"} Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894471 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qc7jt" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.894536 4695 scope.go:117] "RemoveContainer" containerID="0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.900329 4695 generic.go:334] "Generic (PLEG): container finished" podID="98ef5da6-3da8-4add-9ca3-5e02deb0f1a3" containerID="9c65cac06bea0e7c4a5469b21baaade7d502b8f39129c4f61cba99ff05a68329" exitCode=0 Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.900521 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerDied","Data":"9c65cac06bea0e7c4a5469b21baaade7d502b8f39129c4f61cba99ff05a68329"} Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.900585 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"31f4304a6d201b07bf3986c11cae2c3b7af44e79b30cca8bf17d7b0f30136c35"} Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.907825 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-hgtpx_133aab88-6958-4575-aefd-c4675266edd5/kube-multus/2.log" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.908025 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-hgtpx" event={"ID":"133aab88-6958-4575-aefd-c4675266edd5","Type":"ContainerStarted","Data":"587d740e15663d6e541c382a66c009e8e70e68c7bb89652de5c0f86dbdb91ea4"} Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.916190 4695 scope.go:117] "RemoveContainer" containerID="cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.936824 4695 scope.go:117] "RemoveContainer" 
containerID="75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.962929 4695 scope.go:117] "RemoveContainer" containerID="e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3" Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.982444 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qc7jt"] Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.982674 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qc7jt"] Nov 26 13:36:02 crc kubenswrapper[4695]: I1126 13:36:02.991175 4695 scope.go:117] "RemoveContainer" containerID="5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.006621 4695 scope.go:117] "RemoveContainer" containerID="b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.023977 4695 scope.go:117] "RemoveContainer" containerID="bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.053873 4695 scope.go:117] "RemoveContainer" containerID="fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.075322 4695 scope.go:117] "RemoveContainer" containerID="90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.096764 4695 scope.go:117] "RemoveContainer" containerID="0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.097213 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651\": container with ID starting with 
0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651 not found: ID does not exist" containerID="0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.097253 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651"} err="failed to get container status \"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651\": rpc error: code = NotFound desc = could not find container \"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651\": container with ID starting with 0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.097280 4695 scope.go:117] "RemoveContainer" containerID="cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.097811 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\": container with ID starting with cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46 not found: ID does not exist" containerID="cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.097858 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46"} err="failed to get container status \"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\": rpc error: code = NotFound desc = could not find container \"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\": container with ID starting with cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46 not found: ID does not 
exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.097893 4695 scope.go:117] "RemoveContainer" containerID="75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.098312 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\": container with ID starting with 75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16 not found: ID does not exist" containerID="75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.098338 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16"} err="failed to get container status \"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\": rpc error: code = NotFound desc = could not find container \"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\": container with ID starting with 75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.098402 4695 scope.go:117] "RemoveContainer" containerID="e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.098932 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\": container with ID starting with e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3 not found: ID does not exist" containerID="e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.098968 4695 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3"} err="failed to get container status \"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\": rpc error: code = NotFound desc = could not find container \"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\": container with ID starting with e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.098988 4695 scope.go:117] "RemoveContainer" containerID="5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.099374 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\": container with ID starting with 5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969 not found: ID does not exist" containerID="5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.099406 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969"} err="failed to get container status \"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\": rpc error: code = NotFound desc = could not find container \"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\": container with ID starting with 5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.099428 4695 scope.go:117] "RemoveContainer" containerID="b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.099900 4695 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\": container with ID starting with b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d not found: ID does not exist" containerID="b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.099968 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d"} err="failed to get container status \"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\": rpc error: code = NotFound desc = could not find container \"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\": container with ID starting with b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.100013 4695 scope.go:117] "RemoveContainer" containerID="bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.100610 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\": container with ID starting with bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b not found: ID does not exist" containerID="bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.100638 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b"} err="failed to get container status \"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\": rpc error: code = NotFound desc = could 
not find container \"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\": container with ID starting with bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.100658 4695 scope.go:117] "RemoveContainer" containerID="fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.100991 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\": container with ID starting with fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d not found: ID does not exist" containerID="fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.101026 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d"} err="failed to get container status \"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\": rpc error: code = NotFound desc = could not find container \"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\": container with ID starting with fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.101047 4695 scope.go:117] "RemoveContainer" containerID="90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3" Nov 26 13:36:03 crc kubenswrapper[4695]: E1126 13:36:03.101472 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\": container with ID starting with 90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3 not found: 
ID does not exist" containerID="90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.101499 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3"} err="failed to get container status \"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\": rpc error: code = NotFound desc = could not find container \"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\": container with ID starting with 90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.101518 4695 scope.go:117] "RemoveContainer" containerID="0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.101930 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651"} err="failed to get container status \"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651\": rpc error: code = NotFound desc = could not find container \"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651\": container with ID starting with 0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.101960 4695 scope.go:117] "RemoveContainer" containerID="cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.102433 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46"} err="failed to get container status \"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\": rpc error: code = 
NotFound desc = could not find container \"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\": container with ID starting with cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.102452 4695 scope.go:117] "RemoveContainer" containerID="75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.102851 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16"} err="failed to get container status \"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\": rpc error: code = NotFound desc = could not find container \"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\": container with ID starting with 75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.102887 4695 scope.go:117] "RemoveContainer" containerID="e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.103312 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3"} err="failed to get container status \"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\": rpc error: code = NotFound desc = could not find container \"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\": container with ID starting with e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.103336 4695 scope.go:117] "RemoveContainer" containerID="5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969" Nov 26 13:36:03 crc 
kubenswrapper[4695]: I1126 13:36:03.103652 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969"} err="failed to get container status \"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\": rpc error: code = NotFound desc = could not find container \"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\": container with ID starting with 5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.103680 4695 scope.go:117] "RemoveContainer" containerID="b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.104127 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d"} err="failed to get container status \"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\": rpc error: code = NotFound desc = could not find container \"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\": container with ID starting with b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.104148 4695 scope.go:117] "RemoveContainer" containerID="bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.104427 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b"} err="failed to get container status \"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\": rpc error: code = NotFound desc = could not find container \"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\": container 
with ID starting with bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.104450 4695 scope.go:117] "RemoveContainer" containerID="fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.104660 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d"} err="failed to get container status \"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\": rpc error: code = NotFound desc = could not find container \"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\": container with ID starting with fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.104677 4695 scope.go:117] "RemoveContainer" containerID="90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105000 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3"} err="failed to get container status \"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\": rpc error: code = NotFound desc = could not find container \"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\": container with ID starting with 90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105022 4695 scope.go:117] "RemoveContainer" containerID="0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105199 4695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651"} err="failed to get container status \"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651\": rpc error: code = NotFound desc = could not find container \"0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651\": container with ID starting with 0b0e9b4926e5d09c71f45d8fde4646965020f20ed654d2d3049ec5064a816651 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105216 4695 scope.go:117] "RemoveContainer" containerID="cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105476 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46"} err="failed to get container status \"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\": rpc error: code = NotFound desc = could not find container \"cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46\": container with ID starting with cc33214560f35a96071fdce8f44438cd578feb30b55d84407ed74c3119402a46 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105494 4695 scope.go:117] "RemoveContainer" containerID="75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105745 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16"} err="failed to get container status \"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\": rpc error: code = NotFound desc = could not find container \"75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16\": container with ID starting with 75c29cff62f27d986de27f781e39fade32b9b178bec6b568b1127c297cf66b16 not found: ID does not 
exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.105773 4695 scope.go:117] "RemoveContainer" containerID="e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.106078 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3"} err="failed to get container status \"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\": rpc error: code = NotFound desc = could not find container \"e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3\": container with ID starting with e873ebfa34b282687ff61ca5faad0947391c4effc4229eb44e1021b2c66509e3 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.106104 4695 scope.go:117] "RemoveContainer" containerID="5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.106575 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969"} err="failed to get container status \"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\": rpc error: code = NotFound desc = could not find container \"5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969\": container with ID starting with 5d01a781acf3702749780271de97333c19e05fcb99c4e102562a571f5a689969 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.106603 4695 scope.go:117] "RemoveContainer" containerID="b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.106896 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d"} err="failed to get container status 
\"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\": rpc error: code = NotFound desc = could not find container \"b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d\": container with ID starting with b8c23456331792d46dbe56e3a50a3db1050e26dfaf738e1663b5b66bce2e3f4d not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.106954 4695 scope.go:117] "RemoveContainer" containerID="bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.107296 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b"} err="failed to get container status \"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\": rpc error: code = NotFound desc = could not find container \"bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b\": container with ID starting with bc0cc5c9213cbfc6602a62e189ba0b5c527e503457a2a24980aba153b448a44b not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.107323 4695 scope.go:117] "RemoveContainer" containerID="fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.107802 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d"} err="failed to get container status \"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\": rpc error: code = NotFound desc = could not find container \"fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d\": container with ID starting with fd2e3e3360420250bcd21da9f024e5f26ac0721cf3c79acf19bbc305083fd05d not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.107827 4695 scope.go:117] "RemoveContainer" 
containerID="90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.108130 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3"} err="failed to get container status \"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\": rpc error: code = NotFound desc = could not find container \"90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3\": container with ID starting with 90b607e204ba7ee5db36062e21d2c68baeabb2d05723844af64be6aa748547d3 not found: ID does not exist" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.169884 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa56d8f-ad6a-4761-ad93-58a109b0a9a3" path="/var/lib/kubelet/pods/5fa56d8f-ad6a-4761-ad93-58a109b0a9a3/volumes" Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.922576 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"3bf0b330692f75024bf2bed6b6c160f635c4687352d22659cd60a7a7b8207981"} Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.922839 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"837a50ba9ed90af8651a4a59eecf68832478520b53b7f0031c1a33256e17a320"} Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.922851 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"485bb900cdf6e0b1341dd5a3a632cdf2769cfb54301c04d22d17ee3cccc2df1b"} Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.922859 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"fea55ae6547ccf890bdd610d5d948035732f19aab0a814e41fc0bd7152bc3f58"} Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.922868 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"32425273fd2d412ffc8e0b5b7da952305391b544ff2a562f3c2788f4c470edda"} Nov 26 13:36:03 crc kubenswrapper[4695]: I1126 13:36:03.922877 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"bc39b6ad6b81ff3842f626c0141e76ee34142205da342fb921effff188fbb0c7"} Nov 26 13:36:05 crc kubenswrapper[4695]: I1126 13:36:05.936284 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"9f5989b5b50020773c315a9024f3fdb0988ce86078608e3aba9db43afc59d8ea"} Nov 26 13:36:06 crc kubenswrapper[4695]: I1126 13:36:06.396744 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:36:06 crc kubenswrapper[4695]: I1126 13:36:06.396821 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:36:06 crc kubenswrapper[4695]: I1126 13:36:06.970381 4695 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-nczqg" Nov 26 13:36:09 crc kubenswrapper[4695]: I1126 13:36:09.960808 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" event={"ID":"98ef5da6-3da8-4add-9ca3-5e02deb0f1a3","Type":"ContainerStarted","Data":"b4e1c900694f90df8be564bc9cea7050457319b0b847b341a1cb7570fd7c2541"} Nov 26 13:36:09 crc kubenswrapper[4695]: I1126 13:36:09.961543 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:09 crc kubenswrapper[4695]: I1126 13:36:09.961568 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:09 crc kubenswrapper[4695]: I1126 13:36:09.990430 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:09 crc kubenswrapper[4695]: I1126 13:36:09.994582 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" podStartSLOduration=7.994569173 podStartE2EDuration="7.994569173s" podCreationTimestamp="2025-11-26 13:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:36:09.992337833 +0000 UTC m=+753.628162915" watchObservedRunningTime="2025-11-26 13:36:09.994569173 +0000 UTC m=+753.630394265" Nov 26 13:36:10 crc kubenswrapper[4695]: I1126 13:36:10.015951 4695 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Nov 26 13:36:10 crc kubenswrapper[4695]: I1126 13:36:10.966520 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:10 crc kubenswrapper[4695]: I1126 13:36:10.993577 4695 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.686514 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5j548"] Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.688779 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.694517 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j548"] Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.837738 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-utilities\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.837821 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-catalog-content\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.837849 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcx9w\" (UniqueName: \"kubernetes.io/projected/e62e26a4-2b39-43d5-893a-68a5b5624838-kube-api-access-qcx9w\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.938530 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-utilities\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.938613 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-catalog-content\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.938655 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcx9w\" (UniqueName: \"kubernetes.io/projected/e62e26a4-2b39-43d5-893a-68a5b5624838-kube-api-access-qcx9w\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.939141 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-utilities\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.939221 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-catalog-content\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:30 crc kubenswrapper[4695]: I1126 13:36:30.959927 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qcx9w\" (UniqueName: \"kubernetes.io/projected/e62e26a4-2b39-43d5-893a-68a5b5624838-kube-api-access-qcx9w\") pod \"redhat-marketplace-5j548\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:31 crc kubenswrapper[4695]: I1126 13:36:31.008960 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:31 crc kubenswrapper[4695]: I1126 13:36:31.445892 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j548"] Nov 26 13:36:32 crc kubenswrapper[4695]: I1126 13:36:32.089760 4695 generic.go:334] "Generic (PLEG): container finished" podID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerID="49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e" exitCode=0 Nov 26 13:36:32 crc kubenswrapper[4695]: I1126 13:36:32.089868 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j548" event={"ID":"e62e26a4-2b39-43d5-893a-68a5b5624838","Type":"ContainerDied","Data":"49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e"} Nov 26 13:36:32 crc kubenswrapper[4695]: I1126 13:36:32.090074 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j548" event={"ID":"e62e26a4-2b39-43d5-893a-68a5b5624838","Type":"ContainerStarted","Data":"f654c9060a78a884c20d663e0f1bd790ca9fee42bf854e10da86f7c7fd15cef0"} Nov 26 13:36:32 crc kubenswrapper[4695]: I1126 13:36:32.456425 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f9kwn" Nov 26 13:36:33 crc kubenswrapper[4695]: I1126 13:36:33.099806 4695 generic.go:334] "Generic (PLEG): container finished" podID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerID="81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed" exitCode=0 Nov 26 13:36:33 crc 
kubenswrapper[4695]: I1126 13:36:33.099860 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j548" event={"ID":"e62e26a4-2b39-43d5-893a-68a5b5624838","Type":"ContainerDied","Data":"81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed"} Nov 26 13:36:34 crc kubenswrapper[4695]: I1126 13:36:34.108592 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j548" event={"ID":"e62e26a4-2b39-43d5-893a-68a5b5624838","Type":"ContainerStarted","Data":"a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934"} Nov 26 13:36:34 crc kubenswrapper[4695]: I1126 13:36:34.126133 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5j548" podStartSLOduration=2.494414115 podStartE2EDuration="4.126116259s" podCreationTimestamp="2025-11-26 13:36:30 +0000 UTC" firstStartedPulling="2025-11-26 13:36:32.09319984 +0000 UTC m=+775.729024972" lastFinishedPulling="2025-11-26 13:36:33.724902024 +0000 UTC m=+777.360727116" observedRunningTime="2025-11-26 13:36:34.125486889 +0000 UTC m=+777.761311971" watchObservedRunningTime="2025-11-26 13:36:34.126116259 +0000 UTC m=+777.761941341" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.273813 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhfbz"] Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.275579 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.282827 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhfbz"] Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.313459 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-utilities\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.313509 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.313556 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2p2r\" (UniqueName: \"kubernetes.io/projected/07775543-972d-4e28-be0b-ae5c2e0bfcdd-kube-api-access-g2p2r\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.397469 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.397773 4695 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.397904 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.398633 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"84b7005908bcbacbd029ad98335f5091a83ffcb04f71e35adcdf1c55fac6fce2"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.398851 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://84b7005908bcbacbd029ad98335f5091a83ffcb04f71e35adcdf1c55fac6fce2" gracePeriod=600 Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.414323 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-utilities\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.414417 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content\") pod \"community-operators-zhfbz\" (UID: 
\"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.414496 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2p2r\" (UniqueName: \"kubernetes.io/projected/07775543-972d-4e28-be0b-ae5c2e0bfcdd-kube-api-access-g2p2r\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.414952 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-utilities\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.415374 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.439872 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2p2r\" (UniqueName: \"kubernetes.io/projected/07775543-972d-4e28-be0b-ae5c2e0bfcdd-kube-api-access-g2p2r\") pod \"community-operators-zhfbz\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:36 crc kubenswrapper[4695]: I1126 13:36:36.591592 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:37 crc kubenswrapper[4695]: I1126 13:36:37.119788 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhfbz"] Nov 26 13:36:37 crc kubenswrapper[4695]: W1126 13:36:37.125060 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07775543_972d_4e28_be0b_ae5c2e0bfcdd.slice/crio-fb7a2ba202883c382907cfc56b83406d623ec8433fa0d442617ffe4b9a2e2d34 WatchSource:0}: Error finding container fb7a2ba202883c382907cfc56b83406d623ec8433fa0d442617ffe4b9a2e2d34: Status 404 returned error can't find the container with id fb7a2ba202883c382907cfc56b83406d623ec8433fa0d442617ffe4b9a2e2d34 Nov 26 13:36:37 crc kubenswrapper[4695]: I1126 13:36:37.129154 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="84b7005908bcbacbd029ad98335f5091a83ffcb04f71e35adcdf1c55fac6fce2" exitCode=0 Nov 26 13:36:37 crc kubenswrapper[4695]: I1126 13:36:37.129191 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"84b7005908bcbacbd029ad98335f5091a83ffcb04f71e35adcdf1c55fac6fce2"} Nov 26 13:36:37 crc kubenswrapper[4695]: I1126 13:36:37.129217 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"5db1765d388a4f8bc2fce4e005f968080dbee8df86cf9973d4a9582128ebd4df"} Nov 26 13:36:37 crc kubenswrapper[4695]: I1126 13:36:37.129237 4695 scope.go:117] "RemoveContainer" containerID="ded9ef810e781283318e5bdddf519279febd066b4df707f4e1fff9dc7f532238" Nov 26 13:36:38 crc kubenswrapper[4695]: I1126 13:36:38.137096 4695 generic.go:334] "Generic (PLEG): 
container finished" podID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerID="4015066dcee0fd5cdb2fe803a6167a04145f056f23bec48027d4adb68b677712" exitCode=0 Nov 26 13:36:38 crc kubenswrapper[4695]: I1126 13:36:38.137581 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhfbz" event={"ID":"07775543-972d-4e28-be0b-ae5c2e0bfcdd","Type":"ContainerDied","Data":"4015066dcee0fd5cdb2fe803a6167a04145f056f23bec48027d4adb68b677712"} Nov 26 13:36:38 crc kubenswrapper[4695]: I1126 13:36:38.138015 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhfbz" event={"ID":"07775543-972d-4e28-be0b-ae5c2e0bfcdd","Type":"ContainerStarted","Data":"fb7a2ba202883c382907cfc56b83406d623ec8433fa0d442617ffe4b9a2e2d34"} Nov 26 13:36:39 crc kubenswrapper[4695]: I1126 13:36:39.149588 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhfbz" event={"ID":"07775543-972d-4e28-be0b-ae5c2e0bfcdd","Type":"ContainerStarted","Data":"6baeb7439c5485dcfc060eb87e94e85c12b6344dfd41fa46e2a081c414417795"} Nov 26 13:36:40 crc kubenswrapper[4695]: I1126 13:36:40.157097 4695 generic.go:334] "Generic (PLEG): container finished" podID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerID="6baeb7439c5485dcfc060eb87e94e85c12b6344dfd41fa46e2a081c414417795" exitCode=0 Nov 26 13:36:40 crc kubenswrapper[4695]: I1126 13:36:40.157159 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhfbz" event={"ID":"07775543-972d-4e28-be0b-ae5c2e0bfcdd","Type":"ContainerDied","Data":"6baeb7439c5485dcfc060eb87e94e85c12b6344dfd41fa46e2a081c414417795"} Nov 26 13:36:41 crc kubenswrapper[4695]: I1126 13:36:41.009203 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:41 crc kubenswrapper[4695]: I1126 13:36:41.009791 4695 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:41 crc kubenswrapper[4695]: I1126 13:36:41.053210 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:41 crc kubenswrapper[4695]: I1126 13:36:41.170461 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhfbz" event={"ID":"07775543-972d-4e28-be0b-ae5c2e0bfcdd","Type":"ContainerStarted","Data":"3e8c0c5188069f5512aff92cb8c0decab0659788e0b96efece1d1a62a77fbc1f"} Nov 26 13:36:41 crc kubenswrapper[4695]: I1126 13:36:41.194783 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhfbz" podStartSLOduration=2.4487209979999998 podStartE2EDuration="5.194756925s" podCreationTimestamp="2025-11-26 13:36:36 +0000 UTC" firstStartedPulling="2025-11-26 13:36:38.139919399 +0000 UTC m=+781.775744471" lastFinishedPulling="2025-11-26 13:36:40.885955326 +0000 UTC m=+784.521780398" observedRunningTime="2025-11-26 13:36:41.187209636 +0000 UTC m=+784.823034718" watchObservedRunningTime="2025-11-26 13:36:41.194756925 +0000 UTC m=+784.830582007" Nov 26 13:36:41 crc kubenswrapper[4695]: I1126 13:36:41.209600 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.460852 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j548"] Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.462061 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5j548" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="registry-server" containerID="cri-o://a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934" gracePeriod=2 Nov 26 13:36:43 crc 
kubenswrapper[4695]: I1126 13:36:43.821615 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.919146 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcx9w\" (UniqueName: \"kubernetes.io/projected/e62e26a4-2b39-43d5-893a-68a5b5624838-kube-api-access-qcx9w\") pod \"e62e26a4-2b39-43d5-893a-68a5b5624838\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.919311 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-utilities\") pod \"e62e26a4-2b39-43d5-893a-68a5b5624838\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.919387 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-catalog-content\") pod \"e62e26a4-2b39-43d5-893a-68a5b5624838\" (UID: \"e62e26a4-2b39-43d5-893a-68a5b5624838\") " Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.920394 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-utilities" (OuterVolumeSpecName: "utilities") pod "e62e26a4-2b39-43d5-893a-68a5b5624838" (UID: "e62e26a4-2b39-43d5-893a-68a5b5624838"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.928747 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e62e26a4-2b39-43d5-893a-68a5b5624838-kube-api-access-qcx9w" (OuterVolumeSpecName: "kube-api-access-qcx9w") pod "e62e26a4-2b39-43d5-893a-68a5b5624838" (UID: "e62e26a4-2b39-43d5-893a-68a5b5624838"). InnerVolumeSpecName "kube-api-access-qcx9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:43 crc kubenswrapper[4695]: I1126 13:36:43.951938 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e62e26a4-2b39-43d5-893a-68a5b5624838" (UID: "e62e26a4-2b39-43d5-893a-68a5b5624838"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.020566 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.020603 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e62e26a4-2b39-43d5-893a-68a5b5624838-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.020619 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcx9w\" (UniqueName: \"kubernetes.io/projected/e62e26a4-2b39-43d5-893a-68a5b5624838-kube-api-access-qcx9w\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.199769 4695 generic.go:334] "Generic (PLEG): container finished" podID="e62e26a4-2b39-43d5-893a-68a5b5624838" 
containerID="a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934" exitCode=0 Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.199810 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5j548" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.199835 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j548" event={"ID":"e62e26a4-2b39-43d5-893a-68a5b5624838","Type":"ContainerDied","Data":"a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934"} Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.199967 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5j548" event={"ID":"e62e26a4-2b39-43d5-893a-68a5b5624838","Type":"ContainerDied","Data":"f654c9060a78a884c20d663e0f1bd790ca9fee42bf854e10da86f7c7fd15cef0"} Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.199988 4695 scope.go:117] "RemoveContainer" containerID="a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.224047 4695 scope.go:117] "RemoveContainer" containerID="81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.237593 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j548"] Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.246154 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5j548"] Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.264918 4695 scope.go:117] "RemoveContainer" containerID="49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.280521 4695 scope.go:117] "RemoveContainer" containerID="a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934" Nov 26 
13:36:44 crc kubenswrapper[4695]: E1126 13:36:44.280886 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934\": container with ID starting with a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934 not found: ID does not exist" containerID="a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.280983 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934"} err="failed to get container status \"a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934\": rpc error: code = NotFound desc = could not find container \"a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934\": container with ID starting with a191c5f9bca654dd20d93ba6154f6df606568ec3ab3fd5e2067c2aeb1aa6d934 not found: ID does not exist" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.281067 4695 scope.go:117] "RemoveContainer" containerID="81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed" Nov 26 13:36:44 crc kubenswrapper[4695]: E1126 13:36:44.281407 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed\": container with ID starting with 81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed not found: ID does not exist" containerID="81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.281513 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed"} err="failed to get container status 
\"81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed\": rpc error: code = NotFound desc = could not find container \"81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed\": container with ID starting with 81cf5369825eeea88f56d52df006200ba23d3fd5f4e95e5e12a908e1449731ed not found: ID does not exist" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.281588 4695 scope.go:117] "RemoveContainer" containerID="49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e" Nov 26 13:36:44 crc kubenswrapper[4695]: E1126 13:36:44.281903 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e\": container with ID starting with 49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e not found: ID does not exist" containerID="49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e" Nov 26 13:36:44 crc kubenswrapper[4695]: I1126 13:36:44.281979 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e"} err="failed to get container status \"49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e\": rpc error: code = NotFound desc = could not find container \"49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e\": container with ID starting with 49f98804b99bd1641c2b2889d98685e58dae57b59fb7019150e16decd7156a3e not found: ID does not exist" Nov 26 13:36:45 crc kubenswrapper[4695]: I1126 13:36:45.171894 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" path="/var/lib/kubelet/pods/e62e26a4-2b39-43d5-893a-68a5b5624838/volumes" Nov 26 13:36:46 crc kubenswrapper[4695]: I1126 13:36:46.591886 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zhfbz" 
Nov 26 13:36:46 crc kubenswrapper[4695]: I1126 13:36:46.592630 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:46 crc kubenswrapper[4695]: I1126 13:36:46.658614 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:47 crc kubenswrapper[4695]: I1126 13:36:47.267161 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:48 crc kubenswrapper[4695]: I1126 13:36:48.452423 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhfbz"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.235760 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zhfbz" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="registry-server" containerID="cri-o://3e8c0c5188069f5512aff92cb8c0decab0659788e0b96efece1d1a62a77fbc1f" gracePeriod=2 Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.805157 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvcb5"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.805905 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jvcb5" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="registry-server" containerID="cri-o://9dbcbd0e8241da877762ff6724e326b05c69e34f8bcb920b9504a24fa62e1b31" gracePeriod=30 Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.810949 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxxzl"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.811225 4695 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-lxxzl" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="registry-server" containerID="cri-o://4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5" gracePeriod=30 Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.822828 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xxs2"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.823062 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" podUID="27975591-c7e9-4e85-96a9-2a1280f6e1f3" containerName="marketplace-operator" containerID="cri-o://3c5b9eaeab40cc2c73dba5c0cfce9d61d02cb323b06ea6586dd3467615a6e207" gracePeriod=30 Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.836086 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twktp"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.836574 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-twktp" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="registry-server" containerID="cri-o://d2d0e0316c71e568f52d26c5c248b8d4c29373d6ac367b65932294746b637088" gracePeriod=30 Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.840047 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kgsbp"] Nov 26 13:36:50 crc kubenswrapper[4695]: E1126 13:36:50.840307 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="extract-content" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.840330 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="extract-content" Nov 26 13:36:50 crc kubenswrapper[4695]: E1126 13:36:50.840365 4695 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="extract-utilities" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.840376 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="extract-utilities" Nov 26 13:36:50 crc kubenswrapper[4695]: E1126 13:36:50.840385 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="registry-server" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.840394 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="registry-server" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.840517 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e62e26a4-2b39-43d5-893a-68a5b5624838" containerName="registry-server" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.840948 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:50 crc kubenswrapper[4695]: E1126 13:36:50.855962 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5" cmd=["grpc_health_probe","-addr=:50051"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.857996 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnwwr"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.858258 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dnwwr" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="registry-server" containerID="cri-o://170822a7a1f0f9261ef6d123ee4ccfa211e69e221eba47b10006b5fbe68af343" gracePeriod=30 Nov 26 13:36:50 crc kubenswrapper[4695]: E1126 13:36:50.859052 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5" cmd=["grpc_health_probe","-addr=:50051"] Nov 26 13:36:50 crc kubenswrapper[4695]: E1126 13:36:50.863117 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5 is running failed: container process not found" containerID="4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5" cmd=["grpc_health_probe","-addr=:50051"] Nov 26 13:36:50 crc kubenswrapper[4695]: E1126 13:36:50.863181 4695 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not 
created or running: checking if PID of 4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-lxxzl" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="registry-server" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.875248 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kgsbp"] Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.914032 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9sq\" (UniqueName: \"kubernetes.io/projected/bfc137bd-03b5-4b18-a610-f713f2681cc1-kube-api-access-xv9sq\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.914126 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfc137bd-03b5-4b18-a610-f713f2681cc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:50 crc kubenswrapper[4695]: I1126 13:36:50.914234 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bfc137bd-03b5-4b18-a610-f713f2681cc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.015338 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfc137bd-03b5-4b18-a610-f713f2681cc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.015416 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bfc137bd-03b5-4b18-a610-f713f2681cc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.015477 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9sq\" (UniqueName: \"kubernetes.io/projected/bfc137bd-03b5-4b18-a610-f713f2681cc1-kube-api-access-xv9sq\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.016450 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bfc137bd-03b5-4b18-a610-f713f2681cc1-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.032077 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bfc137bd-03b5-4b18-a610-f713f2681cc1-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.035121 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9sq\" (UniqueName: \"kubernetes.io/projected/bfc137bd-03b5-4b18-a610-f713f2681cc1-kube-api-access-xv9sq\") pod \"marketplace-operator-79b997595-kgsbp\" (UID: \"bfc137bd-03b5-4b18-a610-f713f2681cc1\") " pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.172515 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.241567 4695 generic.go:334] "Generic (PLEG): container finished" podID="812ad6aa-bb31-433e-886d-2518da5bb809" containerID="9dbcbd0e8241da877762ff6724e326b05c69e34f8bcb920b9504a24fa62e1b31" exitCode=0 Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.241634 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvcb5" event={"ID":"812ad6aa-bb31-433e-886d-2518da5bb809","Type":"ContainerDied","Data":"9dbcbd0e8241da877762ff6724e326b05c69e34f8bcb920b9504a24fa62e1b31"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.241662 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jvcb5" event={"ID":"812ad6aa-bb31-433e-886d-2518da5bb809","Type":"ContainerDied","Data":"5f1ea4584917010e8b019bc2b82f9e984c02c878b9e637af694010f3539f1dd3"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.241672 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1ea4584917010e8b019bc2b82f9e984c02c878b9e637af694010f3539f1dd3" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.250445 4695 generic.go:334] "Generic (PLEG): container finished" podID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" 
containerID="170822a7a1f0f9261ef6d123ee4ccfa211e69e221eba47b10006b5fbe68af343" exitCode=0 Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.250551 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwwr" event={"ID":"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f","Type":"ContainerDied","Data":"170822a7a1f0f9261ef6d123ee4ccfa211e69e221eba47b10006b5fbe68af343"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.253200 4695 generic.go:334] "Generic (PLEG): container finished" podID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerID="3e8c0c5188069f5512aff92cb8c0decab0659788e0b96efece1d1a62a77fbc1f" exitCode=0 Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.253266 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhfbz" event={"ID":"07775543-972d-4e28-be0b-ae5c2e0bfcdd","Type":"ContainerDied","Data":"3e8c0c5188069f5512aff92cb8c0decab0659788e0b96efece1d1a62a77fbc1f"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.253298 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhfbz" event={"ID":"07775543-972d-4e28-be0b-ae5c2e0bfcdd","Type":"ContainerDied","Data":"fb7a2ba202883c382907cfc56b83406d623ec8433fa0d442617ffe4b9a2e2d34"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.253312 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7a2ba202883c382907cfc56b83406d623ec8433fa0d442617ffe4b9a2e2d34" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.255397 4695 generic.go:334] "Generic (PLEG): container finished" podID="27975591-c7e9-4e85-96a9-2a1280f6e1f3" containerID="3c5b9eaeab40cc2c73dba5c0cfce9d61d02cb323b06ea6586dd3467615a6e207" exitCode=0 Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.255448 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" 
event={"ID":"27975591-c7e9-4e85-96a9-2a1280f6e1f3","Type":"ContainerDied","Data":"3c5b9eaeab40cc2c73dba5c0cfce9d61d02cb323b06ea6586dd3467615a6e207"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.257246 4695 generic.go:334] "Generic (PLEG): container finished" podID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerID="d2d0e0316c71e568f52d26c5c248b8d4c29373d6ac367b65932294746b637088" exitCode=0 Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.257296 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twktp" event={"ID":"71e69014-c2bb-4ab6-88ce-a318d11a0b1c","Type":"ContainerDied","Data":"d2d0e0316c71e568f52d26c5c248b8d4c29373d6ac367b65932294746b637088"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.259290 4695 generic.go:334] "Generic (PLEG): container finished" podID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerID="4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5" exitCode=0 Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.259317 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxxzl" event={"ID":"495bad95-1540-4a1a-b6bb-bfabf3683c2a","Type":"ContainerDied","Data":"4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5"} Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.267858 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.285927 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.287410 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.287696 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.301444 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.356417 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.421878 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-catalog-content\") pod \"812ad6aa-bb31-433e-886d-2518da5bb809\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.421920 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-catalog-content\") pod \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.421950 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rmw7\" (UniqueName: \"kubernetes.io/projected/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-kube-api-access-2rmw7\") pod \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422043 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-utilities\") pod \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422075 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bwgl\" (UniqueName: \"kubernetes.io/projected/812ad6aa-bb31-433e-886d-2518da5bb809-kube-api-access-7bwgl\") pod \"812ad6aa-bb31-433e-886d-2518da5bb809\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422092 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content\") pod \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422108 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7cjh\" (UniqueName: \"kubernetes.io/projected/495bad95-1540-4a1a-b6bb-bfabf3683c2a-kube-api-access-r7cjh\") pod \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422124 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-utilities\") pod \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422153 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-catalog-content\") pod \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\" (UID: \"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f\") " Nov 26 13:36:51 crc 
kubenswrapper[4695]: I1126 13:36:51.422192 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2p2r\" (UniqueName: \"kubernetes.io/projected/07775543-972d-4e28-be0b-ae5c2e0bfcdd-kube-api-access-g2p2r\") pod \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422213 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-catalog-content\") pod \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\" (UID: \"495bad95-1540-4a1a-b6bb-bfabf3683c2a\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422237 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frt9s\" (UniqueName: \"kubernetes.io/projected/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-kube-api-access-frt9s\") pod \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422260 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-utilities\") pod \"812ad6aa-bb31-433e-886d-2518da5bb809\" (UID: \"812ad6aa-bb31-433e-886d-2518da5bb809\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422277 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-utilities\") pod \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\" (UID: \"71e69014-c2bb-4ab6-88ce-a318d11a0b1c\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.422301 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-utilities\") pod \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.424427 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-utilities" (OuterVolumeSpecName: "utilities") pod "ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" (UID: "ee21e334-09ee-4b6f-bcb9-76fd3fc2934f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.426926 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-utilities" (OuterVolumeSpecName: "utilities") pod "812ad6aa-bb31-433e-886d-2518da5bb809" (UID: "812ad6aa-bb31-433e-886d-2518da5bb809"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.426988 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-utilities" (OuterVolumeSpecName: "utilities") pod "07775543-972d-4e28-be0b-ae5c2e0bfcdd" (UID: "07775543-972d-4e28-be0b-ae5c2e0bfcdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.427192 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-utilities" (OuterVolumeSpecName: "utilities") pod "71e69014-c2bb-4ab6-88ce-a318d11a0b1c" (UID: "71e69014-c2bb-4ab6-88ce-a318d11a0b1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.427493 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-utilities" (OuterVolumeSpecName: "utilities") pod "495bad95-1540-4a1a-b6bb-bfabf3683c2a" (UID: "495bad95-1540-4a1a-b6bb-bfabf3683c2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.430289 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-kube-api-access-frt9s" (OuterVolumeSpecName: "kube-api-access-frt9s") pod "71e69014-c2bb-4ab6-88ce-a318d11a0b1c" (UID: "71e69014-c2bb-4ab6-88ce-a318d11a0b1c"). InnerVolumeSpecName "kube-api-access-frt9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.431476 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-kube-api-access-2rmw7" (OuterVolumeSpecName: "kube-api-access-2rmw7") pod "ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" (UID: "ee21e334-09ee-4b6f-bcb9-76fd3fc2934f"). InnerVolumeSpecName "kube-api-access-2rmw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.434865 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07775543-972d-4e28-be0b-ae5c2e0bfcdd-kube-api-access-g2p2r" (OuterVolumeSpecName: "kube-api-access-g2p2r") pod "07775543-972d-4e28-be0b-ae5c2e0bfcdd" (UID: "07775543-972d-4e28-be0b-ae5c2e0bfcdd"). InnerVolumeSpecName "kube-api-access-g2p2r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.443817 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812ad6aa-bb31-433e-886d-2518da5bb809-kube-api-access-7bwgl" (OuterVolumeSpecName: "kube-api-access-7bwgl") pod "812ad6aa-bb31-433e-886d-2518da5bb809" (UID: "812ad6aa-bb31-433e-886d-2518da5bb809"). InnerVolumeSpecName "kube-api-access-7bwgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.466514 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/495bad95-1540-4a1a-b6bb-bfabf3683c2a-kube-api-access-r7cjh" (OuterVolumeSpecName: "kube-api-access-r7cjh") pod "495bad95-1540-4a1a-b6bb-bfabf3683c2a" (UID: "495bad95-1540-4a1a-b6bb-bfabf3683c2a"). InnerVolumeSpecName "kube-api-access-r7cjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499409 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62dgx"] Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499700 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499713 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499723 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499731 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 
13:36:51.499744 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499751 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499759 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499766 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499777 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499783 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499795 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499802 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499812 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499819 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 
13:36:51.499831 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499839 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499847 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499853 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499860 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499866 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499873 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27975591-c7e9-4e85-96a9-2a1280f6e1f3" containerName="marketplace-operator" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499881 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="27975591-c7e9-4e85-96a9-2a1280f6e1f3" containerName="marketplace-operator" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499892 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499899 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: 
E1126 13:36:51.499910 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499916 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499926 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499933 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="extract-content" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499942 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499950 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: E1126 13:36:51.499960 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.499969 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="extract-utilities" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.500073 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.500090 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" containerName="registry-server" Nov 26 13:36:51 crc 
kubenswrapper[4695]: I1126 13:36:51.500099 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.500108 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.500117 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" containerName="registry-server" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.500129 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="27975591-c7e9-4e85-96a9-2a1280f6e1f3" containerName="marketplace-operator" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.500979 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.503167 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "812ad6aa-bb31-433e-886d-2518da5bb809" (UID: "812ad6aa-bb31-433e-886d-2518da5bb809"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.523021 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68dx7\" (UniqueName: \"kubernetes.io/projected/27975591-c7e9-4e85-96a9-2a1280f6e1f3-kube-api-access-68dx7\") pod \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.526283 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07775543-972d-4e28-be0b-ae5c2e0bfcdd" (UID: "07775543-972d-4e28-be0b-ae5c2e0bfcdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.531692 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-trusted-ca\") pod \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.531867 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content\") pod \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\" (UID: \"07775543-972d-4e28-be0b-ae5c2e0bfcdd\") " Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.531989 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-operator-metrics\") pod \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\" (UID: \"27975591-c7e9-4e85-96a9-2a1280f6e1f3\") " Nov 26 13:36:51 crc 
kubenswrapper[4695]: I1126 13:36:51.532234 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6msk\" (UniqueName: \"kubernetes.io/projected/9af3df73-3849-41b1-b1e6-7410f26dcd17-kube-api-access-f6msk\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.532609 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-utilities\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.532813 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-catalog-content\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.532953 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533041 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533119 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-catalog-content\") on node \"crc\" 
DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533184 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rmw7\" (UniqueName: \"kubernetes.io/projected/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-kube-api-access-2rmw7\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533245 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533307 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bwgl\" (UniqueName: \"kubernetes.io/projected/812ad6aa-bb31-433e-886d-2518da5bb809-kube-api-access-7bwgl\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533400 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7cjh\" (UniqueName: \"kubernetes.io/projected/495bad95-1540-4a1a-b6bb-bfabf3683c2a-kube-api-access-r7cjh\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533478 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533542 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2p2r\" (UniqueName: \"kubernetes.io/projected/07775543-972d-4e28-be0b-ae5c2e0bfcdd-kube-api-access-g2p2r\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.533605 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frt9s\" (UniqueName: \"kubernetes.io/projected/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-kube-api-access-frt9s\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 
13:36:51.533665 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/812ad6aa-bb31-433e-886d-2518da5bb809-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.534432 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "27975591-c7e9-4e85-96a9-2a1280f6e1f3" (UID: "27975591-c7e9-4e85-96a9-2a1280f6e1f3"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.534534 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71e69014-c2bb-4ab6-88ce-a318d11a0b1c" (UID: "71e69014-c2bb-4ab6-88ce-a318d11a0b1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: W1126 13:36:51.534631 4695 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/07775543-972d-4e28-be0b-ae5c2e0bfcdd/volumes/kubernetes.io~empty-dir/catalog-content Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.534707 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07775543-972d-4e28-be0b-ae5c2e0bfcdd" (UID: "07775543-972d-4e28-be0b-ae5c2e0bfcdd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.543545 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62dgx"] Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.544209 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27975591-c7e9-4e85-96a9-2a1280f6e1f3-kube-api-access-68dx7" (OuterVolumeSpecName: "kube-api-access-68dx7") pod "27975591-c7e9-4e85-96a9-2a1280f6e1f3" (UID: "27975591-c7e9-4e85-96a9-2a1280f6e1f3"). InnerVolumeSpecName "kube-api-access-68dx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.560860 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "27975591-c7e9-4e85-96a9-2a1280f6e1f3" (UID: "27975591-c7e9-4e85-96a9-2a1280f6e1f3"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.568513 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "495bad95-1540-4a1a-b6bb-bfabf3683c2a" (UID: "495bad95-1540-4a1a-b6bb-bfabf3683c2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.582943 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" (UID: "ee21e334-09ee-4b6f-bcb9-76fd3fc2934f"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.634963 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6msk\" (UniqueName: \"kubernetes.io/projected/9af3df73-3849-41b1-b1e6-7410f26dcd17-kube-api-access-f6msk\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635053 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-utilities\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635116 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-catalog-content\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635359 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/495bad95-1540-4a1a-b6bb-bfabf3683c2a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635464 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71e69014-c2bb-4ab6-88ce-a318d11a0b1c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635524 4695 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635578 4695 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/27975591-c7e9-4e85-96a9-2a1280f6e1f3-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635643 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07775543-972d-4e28-be0b-ae5c2e0bfcdd-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635702 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635800 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68dx7\" (UniqueName: \"kubernetes.io/projected/27975591-c7e9-4e85-96a9-2a1280f6e1f3-kube-api-access-68dx7\") on node \"crc\" DevicePath \"\"" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635607 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-catalog-content\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.635659 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-utilities\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " 
pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.650882 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6msk\" (UniqueName: \"kubernetes.io/projected/9af3df73-3849-41b1-b1e6-7410f26dcd17-kube-api-access-f6msk\") pod \"certified-operators-62dgx\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.674535 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-kgsbp"] Nov 26 13:36:51 crc kubenswrapper[4695]: W1126 13:36:51.677402 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbfc137bd_03b5_4b18_a610_f713f2681cc1.slice/crio-04b2f83171e185695efa64722548aa31aecbaed5db233d40c98659e6873d833a WatchSource:0}: Error finding container 04b2f83171e185695efa64722548aa31aecbaed5db233d40c98659e6873d833a: Status 404 returned error can't find the container with id 04b2f83171e185695efa64722548aa31aecbaed5db233d40c98659e6873d833a Nov 26 13:36:51 crc kubenswrapper[4695]: I1126 13:36:51.823256 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.034491 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62dgx"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.266137 4695 generic.go:334] "Generic (PLEG): container finished" podID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerID="fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91" exitCode=0 Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.266209 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62dgx" event={"ID":"9af3df73-3849-41b1-b1e6-7410f26dcd17","Type":"ContainerDied","Data":"fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.266279 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62dgx" event={"ID":"9af3df73-3849-41b1-b1e6-7410f26dcd17","Type":"ContainerStarted","Data":"3fb4df9328102bcf0b54483b0c1e5517463d6fbc7d85f95df70b1904935e0115"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.269240 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lxxzl" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.269255 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lxxzl" event={"ID":"495bad95-1540-4a1a-b6bb-bfabf3683c2a","Type":"ContainerDied","Data":"fe382a978d12e8e31c0cf28ea3dc9d093f5d1162f0dc21ae3dfdffd756aac624"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.269320 4695 scope.go:117] "RemoveContainer" containerID="4365cf4728703653162051ed07153950f20e0531e146850e15855d83ce4bcfa5" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.272844 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dnwwr" event={"ID":"ee21e334-09ee-4b6f-bcb9-76fd3fc2934f","Type":"ContainerDied","Data":"6e91e1256ff0e5a7b95fa277571cb3ab87db088a33f3f6675e5dd60a99f94ab6"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.272995 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dnwwr" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.276259 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" event={"ID":"27975591-c7e9-4e85-96a9-2a1280f6e1f3","Type":"ContainerDied","Data":"b19d8db895d9e57bbf96625aef17eae9a28c066d864fd8c5a1f9f5181f59c784"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.276302 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-9xxs2" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.278431 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" event={"ID":"bfc137bd-03b5-4b18-a610-f713f2681cc1","Type":"ContainerStarted","Data":"b36e98983aeda0b336551c5cfe508d76e5f6d5b2e2d2e18b294951080f68c697"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.278471 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" event={"ID":"bfc137bd-03b5-4b18-a610-f713f2681cc1","Type":"ContainerStarted","Data":"04b2f83171e185695efa64722548aa31aecbaed5db233d40c98659e6873d833a"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.278689 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.280199 4695 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-kgsbp container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/healthz\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.280277 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" podUID="bfc137bd-03b5-4b18-a610-f713f2681cc1" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.12:8080/healthz\": dial tcp 10.217.0.12:8080: connect: connection refused" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.281470 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-twktp" 
event={"ID":"71e69014-c2bb-4ab6-88ce-a318d11a0b1c","Type":"ContainerDied","Data":"0f57f107c680042f3b8a2a2b2c770da1a24674c45bf15084043a91bcc8b98893"} Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.281514 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jvcb5" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.281595 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-twktp" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.281608 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhfbz" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.297527 4695 scope.go:117] "RemoveContainer" containerID="f1d69accabe6493cbdfde9c1ef460dc3296b15d14874f3cd289240528954879f" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.308859 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" podStartSLOduration=2.308792191 podStartE2EDuration="2.308792191s" podCreationTimestamp="2025-11-26 13:36:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:36:52.307514551 +0000 UTC m=+795.943339653" watchObservedRunningTime="2025-11-26 13:36:52.308792191 +0000 UTC m=+795.944617273" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.329690 4695 scope.go:117] "RemoveContainer" containerID="76bf4f4852ecd3c7a3aea66783b5d04519f3192718fe5673f0753e5eebd1023d" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.340897 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhfbz"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.347240 4695 scope.go:117] "RemoveContainer" 
containerID="170822a7a1f0f9261ef6d123ee4ccfa211e69e221eba47b10006b5fbe68af343" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.350737 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zhfbz"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.360404 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jvcb5"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.364331 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jvcb5"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.377953 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lxxzl"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.386950 4695 scope.go:117] "RemoveContainer" containerID="d5382a362c3df366657086fa8f3965339f71abbd390e0aa772243441b932639b" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.395731 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lxxzl"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.398731 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dnwwr"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.402552 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dnwwr"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.406161 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-9xxs2"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.406894 4695 scope.go:117] "RemoveContainer" containerID="fe27a0b1d1b1ec798595f639af9ab24a223845e00efe4d045e6aa4a78620c687" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.411526 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-9xxs2"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.424329 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-twktp"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.427638 4695 scope.go:117] "RemoveContainer" containerID="3c5b9eaeab40cc2c73dba5c0cfce9d61d02cb323b06ea6586dd3467615a6e207" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.429640 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-twktp"] Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.441419 4695 scope.go:117] "RemoveContainer" containerID="d2d0e0316c71e568f52d26c5c248b8d4c29373d6ac367b65932294746b637088" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.456809 4695 scope.go:117] "RemoveContainer" containerID="cf5ee6598d6630edd4ba3c8b1baa7e8b77f8b7bb9c6e9b54cef2c7a2f5c0e6ce" Nov 26 13:36:52 crc kubenswrapper[4695]: I1126 13:36:52.472507 4695 scope.go:117] "RemoveContainer" containerID="979b6b2ccb3ccf30c0e011f642ac8a26a2f267fed17a003e53736dead9b8456e" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.170907 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07775543-972d-4e28-be0b-ae5c2e0bfcdd" path="/var/lib/kubelet/pods/07775543-972d-4e28-be0b-ae5c2e0bfcdd/volumes" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.171598 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27975591-c7e9-4e85-96a9-2a1280f6e1f3" path="/var/lib/kubelet/pods/27975591-c7e9-4e85-96a9-2a1280f6e1f3/volumes" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.172076 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="495bad95-1540-4a1a-b6bb-bfabf3683c2a" path="/var/lib/kubelet/pods/495bad95-1540-4a1a-b6bb-bfabf3683c2a/volumes" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.173083 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="71e69014-c2bb-4ab6-88ce-a318d11a0b1c" path="/var/lib/kubelet/pods/71e69014-c2bb-4ab6-88ce-a318d11a0b1c/volumes" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.173662 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812ad6aa-bb31-433e-886d-2518da5bb809" path="/var/lib/kubelet/pods/812ad6aa-bb31-433e-886d-2518da5bb809/volumes" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.174625 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee21e334-09ee-4b6f-bcb9-76fd3fc2934f" path="/var/lib/kubelet/pods/ee21e334-09ee-4b6f-bcb9-76fd3fc2934f/volumes" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.292409 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62dgx" event={"ID":"9af3df73-3849-41b1-b1e6-7410f26dcd17","Type":"ContainerStarted","Data":"de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e"} Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.297979 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-kgsbp" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.866067 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8kklb"] Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.869747 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.873729 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.889143 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kklb"] Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.973485 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpkb7\" (UniqueName: \"kubernetes.io/projected/6423e816-b0ce-4d7d-9077-a787cb8d71ba-kube-api-access-wpkb7\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.973797 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6423e816-b0ce-4d7d-9077-a787cb8d71ba-utilities\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:53 crc kubenswrapper[4695]: I1126 13:36:53.973990 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6423e816-b0ce-4d7d-9077-a787cb8d71ba-catalog-content\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.075967 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpkb7\" (UniqueName: \"kubernetes.io/projected/6423e816-b0ce-4d7d-9077-a787cb8d71ba-kube-api-access-wpkb7\") pod \"redhat-marketplace-8kklb\" (UID: 
\"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.076054 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6423e816-b0ce-4d7d-9077-a787cb8d71ba-utilities\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.076127 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6423e816-b0ce-4d7d-9077-a787cb8d71ba-catalog-content\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.077236 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6423e816-b0ce-4d7d-9077-a787cb8d71ba-utilities\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.077596 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6423e816-b0ce-4d7d-9077-a787cb8d71ba-catalog-content\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.098300 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpkb7\" (UniqueName: \"kubernetes.io/projected/6423e816-b0ce-4d7d-9077-a787cb8d71ba-kube-api-access-wpkb7\") pod \"redhat-marketplace-8kklb\" (UID: \"6423e816-b0ce-4d7d-9077-a787cb8d71ba\") " 
pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.198743 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.313020 4695 generic.go:334] "Generic (PLEG): container finished" podID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerID="de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e" exitCode=0 Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.313132 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62dgx" event={"ID":"9af3df73-3849-41b1-b1e6-7410f26dcd17","Type":"ContainerDied","Data":"de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e"} Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.405623 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8kklb"] Nov 26 13:36:54 crc kubenswrapper[4695]: W1126 13:36:54.414532 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6423e816_b0ce_4d7d_9077_a787cb8d71ba.slice/crio-80ebce3b78edfda4bc219f0234c77aeb0bac0960e45294c9e93049999e0b28e7 WatchSource:0}: Error finding container 80ebce3b78edfda4bc219f0234c77aeb0bac0960e45294c9e93049999e0b28e7: Status 404 returned error can't find the container with id 80ebce3b78edfda4bc219f0234c77aeb0bac0960e45294c9e93049999e0b28e7 Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.460885 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qhr75"] Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.462168 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.463778 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.471049 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhr75"] Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.481318 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-catalog-content\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.481405 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6k79\" (UniqueName: \"kubernetes.io/projected/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-kube-api-access-m6k79\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.481606 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-utilities\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.583020 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-catalog-content\") pod \"redhat-operators-qhr75\" (UID: 
\"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.583097 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6k79\" (UniqueName: \"kubernetes.io/projected/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-kube-api-access-m6k79\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.583169 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-utilities\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.583600 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-catalog-content\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.583689 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-utilities\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.600530 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6k79\" (UniqueName: \"kubernetes.io/projected/022ac9aa-fecd-4f12-89ac-c3ed0dd88270-kube-api-access-m6k79\") pod \"redhat-operators-qhr75\" (UID: \"022ac9aa-fecd-4f12-89ac-c3ed0dd88270\") " 
pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.819775 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.867522 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-blppl"] Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.872218 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.883885 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blppl"] Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.887195 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-catalog-content\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.887469 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2s7\" (UniqueName: \"kubernetes.io/projected/804c7db3-4464-44a3-8fba-55d98f498bc7-kube-api-access-qz2s7\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.887561 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-utilities\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" 
Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.989146 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-utilities\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.989249 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-catalog-content\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.989305 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2s7\" (UniqueName: \"kubernetes.io/projected/804c7db3-4464-44a3-8fba-55d98f498bc7-kube-api-access-qz2s7\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.989792 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-utilities\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:54 crc kubenswrapper[4695]: I1126 13:36:54.990173 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-catalog-content\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.018766 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2s7\" (UniqueName: \"kubernetes.io/projected/804c7db3-4464-44a3-8fba-55d98f498bc7-kube-api-access-qz2s7\") pod \"redhat-operators-blppl\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.113214 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qhr75"] Nov 26 13:36:55 crc kubenswrapper[4695]: W1126 13:36:55.121564 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod022ac9aa_fecd_4f12_89ac_c3ed0dd88270.slice/crio-2792664ab793b085d7e6684f772055365738f012f99b539d29944e95f1431ebe WatchSource:0}: Error finding container 2792664ab793b085d7e6684f772055365738f012f99b539d29944e95f1431ebe: Status 404 returned error can't find the container with id 2792664ab793b085d7e6684f772055365738f012f99b539d29944e95f1431ebe Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.197342 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.320430 4695 generic.go:334] "Generic (PLEG): container finished" podID="6423e816-b0ce-4d7d-9077-a787cb8d71ba" containerID="1637de68baa3e6d7d3fbadd2d081d8e87233d8acf61ae5f9aa4e2edc24a4b07d" exitCode=0 Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.320524 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kklb" event={"ID":"6423e816-b0ce-4d7d-9077-a787cb8d71ba","Type":"ContainerDied","Data":"1637de68baa3e6d7d3fbadd2d081d8e87233d8acf61ae5f9aa4e2edc24a4b07d"} Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.320584 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kklb" event={"ID":"6423e816-b0ce-4d7d-9077-a787cb8d71ba","Type":"ContainerStarted","Data":"80ebce3b78edfda4bc219f0234c77aeb0bac0960e45294c9e93049999e0b28e7"} Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.323207 4695 generic.go:334] "Generic (PLEG): container finished" podID="022ac9aa-fecd-4f12-89ac-c3ed0dd88270" containerID="112f24148355d24f3786c82a9ad501f973c28ff71809c4b5b99ce6f665214cf9" exitCode=0 Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.323263 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhr75" event={"ID":"022ac9aa-fecd-4f12-89ac-c3ed0dd88270","Type":"ContainerDied","Data":"112f24148355d24f3786c82a9ad501f973c28ff71809c4b5b99ce6f665214cf9"} Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.323284 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhr75" event={"ID":"022ac9aa-fecd-4f12-89ac-c3ed0dd88270","Type":"ContainerStarted","Data":"2792664ab793b085d7e6684f772055365738f012f99b539d29944e95f1431ebe"} Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.330655 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-62dgx" event={"ID":"9af3df73-3849-41b1-b1e6-7410f26dcd17","Type":"ContainerStarted","Data":"9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9"} Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.374956 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62dgx" podStartSLOduration=1.71226909 podStartE2EDuration="4.374935373s" podCreationTimestamp="2025-11-26 13:36:51 +0000 UTC" firstStartedPulling="2025-11-26 13:36:52.267826432 +0000 UTC m=+795.903651504" lastFinishedPulling="2025-11-26 13:36:54.930492695 +0000 UTC m=+798.566317787" observedRunningTime="2025-11-26 13:36:55.365750933 +0000 UTC m=+799.001576035" watchObservedRunningTime="2025-11-26 13:36:55.374935373 +0000 UTC m=+799.010760455" Nov 26 13:36:55 crc kubenswrapper[4695]: I1126 13:36:55.385509 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blppl"] Nov 26 13:36:55 crc kubenswrapper[4695]: W1126 13:36:55.392358 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804c7db3_4464_44a3_8fba_55d98f498bc7.slice/crio-b6cd91e66cf1a40f6fa9ec8da85fd2f06e743e004f2aa0c64dab8f674a344986 WatchSource:0}: Error finding container b6cd91e66cf1a40f6fa9ec8da85fd2f06e743e004f2aa0c64dab8f674a344986: Status 404 returned error can't find the container with id b6cd91e66cf1a40f6fa9ec8da85fd2f06e743e004f2aa0c64dab8f674a344986 Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.258364 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4p5cl"] Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.260000 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.271549 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4p5cl"] Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.311712 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54427ae6-22d0-4333-bbc2-71746260bc34-catalog-content\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.311870 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54427ae6-22d0-4333-bbc2-71746260bc34-utilities\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.311908 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4vg\" (UniqueName: \"kubernetes.io/projected/54427ae6-22d0-4333-bbc2-71746260bc34-kube-api-access-mn4vg\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.338537 4695 generic.go:334] "Generic (PLEG): container finished" podID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerID="5430c0af09dfe408047e4da7b5fbc3ec07ff57256c1533bc840e59c913fca1a2" exitCode=0 Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.338631 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blppl" 
event={"ID":"804c7db3-4464-44a3-8fba-55d98f498bc7","Type":"ContainerDied","Data":"5430c0af09dfe408047e4da7b5fbc3ec07ff57256c1533bc840e59c913fca1a2"} Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.338674 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blppl" event={"ID":"804c7db3-4464-44a3-8fba-55d98f498bc7","Type":"ContainerStarted","Data":"b6cd91e66cf1a40f6fa9ec8da85fd2f06e743e004f2aa0c64dab8f674a344986"} Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.413743 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54427ae6-22d0-4333-bbc2-71746260bc34-utilities\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.413805 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn4vg\" (UniqueName: \"kubernetes.io/projected/54427ae6-22d0-4333-bbc2-71746260bc34-kube-api-access-mn4vg\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.413876 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54427ae6-22d0-4333-bbc2-71746260bc34-catalog-content\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.414320 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/54427ae6-22d0-4333-bbc2-71746260bc34-utilities\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " 
pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.414337 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/54427ae6-22d0-4333-bbc2-71746260bc34-catalog-content\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.439222 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn4vg\" (UniqueName: \"kubernetes.io/projected/54427ae6-22d0-4333-bbc2-71746260bc34-kube-api-access-mn4vg\") pod \"certified-operators-4p5cl\" (UID: \"54427ae6-22d0-4333-bbc2-71746260bc34\") " pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: I1126 13:36:56.631642 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:36:56 crc kubenswrapper[4695]: E1126 13:36:56.959730 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6423e816_b0ce_4d7d_9077_a787cb8d71ba.slice/crio-conmon-f8994d4835a55eacc1bcfd234f913b9503452cc1fdd896e9efb1634ebe5e03a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6423e816_b0ce_4d7d_9077_a787cb8d71ba.slice/crio-f8994d4835a55eacc1bcfd234f913b9503452cc1fdd896e9efb1634ebe5e03a3.scope\": RecentStats: unable to find data in memory cache]" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.035506 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4p5cl"] Nov 26 13:36:57 crc kubenswrapper[4695]: W1126 13:36:57.044139 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54427ae6_22d0_4333_bbc2_71746260bc34.slice/crio-99429dd51ac1f9348b9f5110f4c98f7c0bce81a1494245130ace045ab0d63a80 WatchSource:0}: Error finding container 99429dd51ac1f9348b9f5110f4c98f7c0bce81a1494245130ace045ab0d63a80: Status 404 returned error can't find the container with id 99429dd51ac1f9348b9f5110f4c98f7c0bce81a1494245130ace045ab0d63a80 Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.271957 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9gxbw"] Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.274283 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.278838 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.281389 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gxbw"] Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.323471 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-utilities\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.323641 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j6x6\" (UniqueName: \"kubernetes.io/projected/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-kube-api-access-5j6x6\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc 
kubenswrapper[4695]: I1126 13:36:57.323723 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-catalog-content\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.351486 4695 generic.go:334] "Generic (PLEG): container finished" podID="022ac9aa-fecd-4f12-89ac-c3ed0dd88270" containerID="f79661f5dbece4c64f43b850a2d63ba14ed9121a23248becd8d915003b416cc3" exitCode=0 Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.351597 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhr75" event={"ID":"022ac9aa-fecd-4f12-89ac-c3ed0dd88270","Type":"ContainerDied","Data":"f79661f5dbece4c64f43b850a2d63ba14ed9121a23248becd8d915003b416cc3"} Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.354126 4695 generic.go:334] "Generic (PLEG): container finished" podID="6423e816-b0ce-4d7d-9077-a787cb8d71ba" containerID="f8994d4835a55eacc1bcfd234f913b9503452cc1fdd896e9efb1634ebe5e03a3" exitCode=0 Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.354209 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kklb" event={"ID":"6423e816-b0ce-4d7d-9077-a787cb8d71ba","Type":"ContainerDied","Data":"f8994d4835a55eacc1bcfd234f913b9503452cc1fdd896e9efb1634ebe5e03a3"} Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.363583 4695 generic.go:334] "Generic (PLEG): container finished" podID="54427ae6-22d0-4333-bbc2-71746260bc34" containerID="ec784c3ed3985905b6547649232d5e26c33d1ae6f53f1135a139df2fcdb70ab2" exitCode=0 Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.363632 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p5cl" 
event={"ID":"54427ae6-22d0-4333-bbc2-71746260bc34","Type":"ContainerDied","Data":"ec784c3ed3985905b6547649232d5e26c33d1ae6f53f1135a139df2fcdb70ab2"} Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.364541 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p5cl" event={"ID":"54427ae6-22d0-4333-bbc2-71746260bc34","Type":"ContainerStarted","Data":"99429dd51ac1f9348b9f5110f4c98f7c0bce81a1494245130ace045ab0d63a80"} Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.425075 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-utilities\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.425140 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j6x6\" (UniqueName: \"kubernetes.io/projected/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-kube-api-access-5j6x6\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.425186 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-catalog-content\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.425613 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-catalog-content\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " 
pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.425947 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-utilities\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.443015 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j6x6\" (UniqueName: \"kubernetes.io/projected/9dc615cf-d3c4-4af3-a159-b90c9e970ab7-kube-api-access-5j6x6\") pod \"community-operators-9gxbw\" (UID: \"9dc615cf-d3c4-4af3-a159-b90c9e970ab7\") " pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:57 crc kubenswrapper[4695]: I1126 13:36:57.610500 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.053313 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9gxbw"] Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.371536 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8kklb" event={"ID":"6423e816-b0ce-4d7d-9077-a787cb8d71ba","Type":"ContainerStarted","Data":"10f88784cb93ccc2ea49c73377302a9ccea78f7b2e473320b1f156def0d54a13"} Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.373399 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p5cl" event={"ID":"54427ae6-22d0-4333-bbc2-71746260bc34","Type":"ContainerStarted","Data":"26b1b2077aae41d2ae27902beace047f668bc2a69c6c76fa8003042df0d16d58"} Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.376884 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-blppl" event={"ID":"804c7db3-4464-44a3-8fba-55d98f498bc7","Type":"ContainerStarted","Data":"d1282466a95330171d5010e1c501c879047571db02b87ca15e55f3fcaf105c78"} Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.381418 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qhr75" event={"ID":"022ac9aa-fecd-4f12-89ac-c3ed0dd88270","Type":"ContainerStarted","Data":"cc592f1994b7947a6623a75456c036e8a816136ebe3ff4b6e84f306076f017d3"} Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.382827 4695 generic.go:334] "Generic (PLEG): container finished" podID="9dc615cf-d3c4-4af3-a159-b90c9e970ab7" containerID="205542a058b2f63a25b1e9ae5a8a9035ccb4e49953ef4a50c4c1d7f551191035" exitCode=0 Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.382863 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxbw" event={"ID":"9dc615cf-d3c4-4af3-a159-b90c9e970ab7","Type":"ContainerDied","Data":"205542a058b2f63a25b1e9ae5a8a9035ccb4e49953ef4a50c4c1d7f551191035"} Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.382881 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxbw" event={"ID":"9dc615cf-d3c4-4af3-a159-b90c9e970ab7","Type":"ContainerStarted","Data":"37a29df2a3db73492d5a4b13ae6740e04c0bfe08b28f1c2de96d13bde8a20ed4"} Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.400313 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8kklb" podStartSLOduration=2.555925358 podStartE2EDuration="5.40029758s" podCreationTimestamp="2025-11-26 13:36:53 +0000 UTC" firstStartedPulling="2025-11-26 13:36:55.322475281 +0000 UTC m=+798.958300373" lastFinishedPulling="2025-11-26 13:36:58.166847503 +0000 UTC m=+801.802672595" observedRunningTime="2025-11-26 13:36:58.394370283 +0000 UTC m=+802.030195365" 
watchObservedRunningTime="2025-11-26 13:36:58.40029758 +0000 UTC m=+802.036122662" Nov 26 13:36:58 crc kubenswrapper[4695]: I1126 13:36:58.422476 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qhr75" podStartSLOduration=1.8034924380000001 podStartE2EDuration="4.422454907s" podCreationTimestamp="2025-11-26 13:36:54 +0000 UTC" firstStartedPulling="2025-11-26 13:36:55.325744794 +0000 UTC m=+798.961569876" lastFinishedPulling="2025-11-26 13:36:57.944707263 +0000 UTC m=+801.580532345" observedRunningTime="2025-11-26 13:36:58.418772261 +0000 UTC m=+802.054597343" watchObservedRunningTime="2025-11-26 13:36:58.422454907 +0000 UTC m=+802.058279989" Nov 26 13:36:59 crc kubenswrapper[4695]: I1126 13:36:59.390406 4695 generic.go:334] "Generic (PLEG): container finished" podID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerID="d1282466a95330171d5010e1c501c879047571db02b87ca15e55f3fcaf105c78" exitCode=0 Nov 26 13:36:59 crc kubenswrapper[4695]: I1126 13:36:59.390447 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blppl" event={"ID":"804c7db3-4464-44a3-8fba-55d98f498bc7","Type":"ContainerDied","Data":"d1282466a95330171d5010e1c501c879047571db02b87ca15e55f3fcaf105c78"} Nov 26 13:36:59 crc kubenswrapper[4695]: I1126 13:36:59.393081 4695 generic.go:334] "Generic (PLEG): container finished" podID="54427ae6-22d0-4333-bbc2-71746260bc34" containerID="26b1b2077aae41d2ae27902beace047f668bc2a69c6c76fa8003042df0d16d58" exitCode=0 Nov 26 13:36:59 crc kubenswrapper[4695]: I1126 13:36:59.393462 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p5cl" event={"ID":"54427ae6-22d0-4333-bbc2-71746260bc34","Type":"ContainerDied","Data":"26b1b2077aae41d2ae27902beace047f668bc2a69c6c76fa8003042df0d16d58"} Nov 26 13:37:00 crc kubenswrapper[4695]: I1126 13:37:00.401965 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-4p5cl" event={"ID":"54427ae6-22d0-4333-bbc2-71746260bc34","Type":"ContainerStarted","Data":"602e91fef01479d6a4425e96a268b9d559161593f0fe3d335c63722ca88d73da"} Nov 26 13:37:00 crc kubenswrapper[4695]: I1126 13:37:00.404335 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blppl" event={"ID":"804c7db3-4464-44a3-8fba-55d98f498bc7","Type":"ContainerStarted","Data":"45208b2557b10a26b025280c5967c362f701acb33e7bb8730bd12c737bf56854"} Nov 26 13:37:00 crc kubenswrapper[4695]: I1126 13:37:00.406739 4695 generic.go:334] "Generic (PLEG): container finished" podID="9dc615cf-d3c4-4af3-a159-b90c9e970ab7" containerID="bd3b2e6929032766905e209a552b7a8a92a235f5417a3ceff40251277b847f89" exitCode=0 Nov 26 13:37:00 crc kubenswrapper[4695]: I1126 13:37:00.406771 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxbw" event={"ID":"9dc615cf-d3c4-4af3-a159-b90c9e970ab7","Type":"ContainerDied","Data":"bd3b2e6929032766905e209a552b7a8a92a235f5417a3ceff40251277b847f89"} Nov 26 13:37:00 crc kubenswrapper[4695]: I1126 13:37:00.430621 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4p5cl" podStartSLOduration=1.8263279190000001 podStartE2EDuration="4.430604774s" podCreationTimestamp="2025-11-26 13:36:56 +0000 UTC" firstStartedPulling="2025-11-26 13:36:57.365447614 +0000 UTC m=+801.001272706" lastFinishedPulling="2025-11-26 13:36:59.969724479 +0000 UTC m=+803.605549561" observedRunningTime="2025-11-26 13:37:00.427889088 +0000 UTC m=+804.063714170" watchObservedRunningTime="2025-11-26 13:37:00.430604774 +0000 UTC m=+804.066429856" Nov 26 13:37:00 crc kubenswrapper[4695]: I1126 13:37:00.472299 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-blppl" podStartSLOduration=2.851880462 podStartE2EDuration="6.472278765s" 
podCreationTimestamp="2025-11-26 13:36:54 +0000 UTC" firstStartedPulling="2025-11-26 13:36:56.34021732 +0000 UTC m=+799.976042412" lastFinishedPulling="2025-11-26 13:36:59.960615633 +0000 UTC m=+803.596440715" observedRunningTime="2025-11-26 13:37:00.467943888 +0000 UTC m=+804.103768980" watchObservedRunningTime="2025-11-26 13:37:00.472278765 +0000 UTC m=+804.108103857" Nov 26 13:37:01 crc kubenswrapper[4695]: I1126 13:37:01.414438 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9gxbw" event={"ID":"9dc615cf-d3c4-4af3-a159-b90c9e970ab7","Type":"ContainerStarted","Data":"d9e4402add7ac649c56370556e3b74aef32f83b2c6c9e25d021e21dadffb42b9"} Nov 26 13:37:01 crc kubenswrapper[4695]: I1126 13:37:01.435085 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9gxbw" podStartSLOduration=1.929888596 podStartE2EDuration="4.435067254s" podCreationTimestamp="2025-11-26 13:36:57 +0000 UTC" firstStartedPulling="2025-11-26 13:36:58.383868053 +0000 UTC m=+802.019693135" lastFinishedPulling="2025-11-26 13:37:00.889046701 +0000 UTC m=+804.524871793" observedRunningTime="2025-11-26 13:37:01.430408847 +0000 UTC m=+805.066233959" watchObservedRunningTime="2025-11-26 13:37:01.435067254 +0000 UTC m=+805.070892336" Nov 26 13:37:01 crc kubenswrapper[4695]: I1126 13:37:01.823842 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:37:01 crc kubenswrapper[4695]: I1126 13:37:01.823903 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:37:01 crc kubenswrapper[4695]: I1126 13:37:01.866955 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:37:02 crc kubenswrapper[4695]: I1126 13:37:02.459222 4695 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.199921 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.200270 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.254471 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.481389 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8kklb" Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.652459 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62dgx"] Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.652896 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62dgx" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="registry-server" containerID="cri-o://9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9" gracePeriod=2 Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.820389 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.820437 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:37:04 crc kubenswrapper[4695]: I1126 13:37:04.861614 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:37:05 crc kubenswrapper[4695]: 
I1126 13:37:05.198554 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:37:05 crc kubenswrapper[4695]: I1126 13:37:05.200453 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:37:05 crc kubenswrapper[4695]: I1126 13:37:05.235402 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:37:05 crc kubenswrapper[4695]: I1126 13:37:05.517691 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qhr75" Nov 26 13:37:05 crc kubenswrapper[4695]: I1126 13:37:05.518936 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:37:06 crc kubenswrapper[4695]: I1126 13:37:06.631972 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:37:06 crc kubenswrapper[4695]: I1126 13:37:06.632300 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:37:06 crc kubenswrapper[4695]: I1126 13:37:06.672248 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:37:07 crc kubenswrapper[4695]: I1126 13:37:07.522964 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4p5cl" Nov 26 13:37:07 crc kubenswrapper[4695]: I1126 13:37:07.610932 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:37:07 crc kubenswrapper[4695]: I1126 13:37:07.611325 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:37:07 crc kubenswrapper[4695]: I1126 13:37:07.647831 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:37:08 crc kubenswrapper[4695]: I1126 13:37:08.527055 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9gxbw" Nov 26 13:37:09 crc kubenswrapper[4695]: I1126 13:37:09.051782 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blppl"] Nov 26 13:37:09 crc kubenswrapper[4695]: I1126 13:37:09.052090 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-blppl" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="registry-server" containerID="cri-o://45208b2557b10a26b025280c5967c362f701acb33e7bb8730bd12c737bf56854" gracePeriod=2 Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.397763 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.489714 4695 generic.go:334] "Generic (PLEG): container finished" podID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerID="9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9" exitCode=0 Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.489791 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62dgx" event={"ID":"9af3df73-3849-41b1-b1e6-7410f26dcd17","Type":"ContainerDied","Data":"9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9"} Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.489832 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62dgx" event={"ID":"9af3df73-3849-41b1-b1e6-7410f26dcd17","Type":"ContainerDied","Data":"3fb4df9328102bcf0b54483b0c1e5517463d6fbc7d85f95df70b1904935e0115"} Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.489855 4695 scope.go:117] "RemoveContainer" containerID="9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.489866 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62dgx" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.494707 4695 generic.go:334] "Generic (PLEG): container finished" podID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerID="45208b2557b10a26b025280c5967c362f701acb33e7bb8730bd12c737bf56854" exitCode=0 Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.494824 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blppl" event={"ID":"804c7db3-4464-44a3-8fba-55d98f498bc7","Type":"ContainerDied","Data":"45208b2557b10a26b025280c5967c362f701acb33e7bb8730bd12c737bf56854"} Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.509712 4695 scope.go:117] "RemoveContainer" containerID="de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.514950 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-catalog-content\") pod \"9af3df73-3849-41b1-b1e6-7410f26dcd17\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.515009 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-utilities\") pod \"9af3df73-3849-41b1-b1e6-7410f26dcd17\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.515103 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6msk\" (UniqueName: \"kubernetes.io/projected/9af3df73-3849-41b1-b1e6-7410f26dcd17-kube-api-access-f6msk\") pod \"9af3df73-3849-41b1-b1e6-7410f26dcd17\" (UID: \"9af3df73-3849-41b1-b1e6-7410f26dcd17\") " Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.517148 4695 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-utilities" (OuterVolumeSpecName: "utilities") pod "9af3df73-3849-41b1-b1e6-7410f26dcd17" (UID: "9af3df73-3849-41b1-b1e6-7410f26dcd17"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.520743 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af3df73-3849-41b1-b1e6-7410f26dcd17-kube-api-access-f6msk" (OuterVolumeSpecName: "kube-api-access-f6msk") pod "9af3df73-3849-41b1-b1e6-7410f26dcd17" (UID: "9af3df73-3849-41b1-b1e6-7410f26dcd17"). InnerVolumeSpecName "kube-api-access-f6msk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.528257 4695 scope.go:117] "RemoveContainer" containerID="fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.558376 4695 scope.go:117] "RemoveContainer" containerID="9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9" Nov 26 13:37:10 crc kubenswrapper[4695]: E1126 13:37:10.558746 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9\": container with ID starting with 9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9 not found: ID does not exist" containerID="9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.558800 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9"} err="failed to get container status \"9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9\": rpc error: code = NotFound desc = could not 
find container \"9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9\": container with ID starting with 9a8e7cc2dade331b28924748383f5254cd66b7eeabb13609531cacb36022b7f9 not found: ID does not exist" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.558832 4695 scope.go:117] "RemoveContainer" containerID="de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e" Nov 26 13:37:10 crc kubenswrapper[4695]: E1126 13:37:10.559287 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e\": container with ID starting with de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e not found: ID does not exist" containerID="de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.559313 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e"} err="failed to get container status \"de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e\": rpc error: code = NotFound desc = could not find container \"de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e\": container with ID starting with de80b6aaefa025dc8099fb4986510949dc63770497500339aa98e0a5aed7228e not found: ID does not exist" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.559330 4695 scope.go:117] "RemoveContainer" containerID="fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91" Nov 26 13:37:10 crc kubenswrapper[4695]: E1126 13:37:10.559777 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91\": container with ID starting with fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91 not found: ID 
does not exist" containerID="fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.559813 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91"} err="failed to get container status \"fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91\": rpc error: code = NotFound desc = could not find container \"fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91\": container with ID starting with fab1725ee8a06fafdfc33945ecdcc6569d8c73d41c3c54caa5b3aedb56933d91 not found: ID does not exist" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.571687 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9af3df73-3849-41b1-b1e6-7410f26dcd17" (UID: "9af3df73-3849-41b1-b1e6-7410f26dcd17"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.616714 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.616752 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6msk\" (UniqueName: \"kubernetes.io/projected/9af3df73-3849-41b1-b1e6-7410f26dcd17-kube-api-access-f6msk\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.616762 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9af3df73-3849-41b1-b1e6-7410f26dcd17-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.735734 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.818628 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-utilities\") pod \"804c7db3-4464-44a3-8fba-55d98f498bc7\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.818740 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-catalog-content\") pod \"804c7db3-4464-44a3-8fba-55d98f498bc7\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.818812 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz2s7\" (UniqueName: 
\"kubernetes.io/projected/804c7db3-4464-44a3-8fba-55d98f498bc7-kube-api-access-qz2s7\") pod \"804c7db3-4464-44a3-8fba-55d98f498bc7\" (UID: \"804c7db3-4464-44a3-8fba-55d98f498bc7\") " Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.818960 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62dgx"] Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.819900 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-utilities" (OuterVolumeSpecName: "utilities") pod "804c7db3-4464-44a3-8fba-55d98f498bc7" (UID: "804c7db3-4464-44a3-8fba-55d98f498bc7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.823079 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62dgx"] Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.839842 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804c7db3-4464-44a3-8fba-55d98f498bc7-kube-api-access-qz2s7" (OuterVolumeSpecName: "kube-api-access-qz2s7") pod "804c7db3-4464-44a3-8fba-55d98f498bc7" (UID: "804c7db3-4464-44a3-8fba-55d98f498bc7"). InnerVolumeSpecName "kube-api-access-qz2s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.900466 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "804c7db3-4464-44a3-8fba-55d98f498bc7" (UID: "804c7db3-4464-44a3-8fba-55d98f498bc7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.920092 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.920119 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz2s7\" (UniqueName: \"kubernetes.io/projected/804c7db3-4464-44a3-8fba-55d98f498bc7-kube-api-access-qz2s7\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:10 crc kubenswrapper[4695]: I1126 13:37:10.920129 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/804c7db3-4464-44a3-8fba-55d98f498bc7-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.169875 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" path="/var/lib/kubelet/pods/9af3df73-3849-41b1-b1e6-7410f26dcd17/volumes" Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.502649 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blppl" event={"ID":"804c7db3-4464-44a3-8fba-55d98f498bc7","Type":"ContainerDied","Data":"b6cd91e66cf1a40f6fa9ec8da85fd2f06e743e004f2aa0c64dab8f674a344986"} Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.502700 4695 scope.go:117] "RemoveContainer" containerID="45208b2557b10a26b025280c5967c362f701acb33e7bb8730bd12c737bf56854" Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.503453 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blppl" Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.515295 4695 scope.go:117] "RemoveContainer" containerID="d1282466a95330171d5010e1c501c879047571db02b87ca15e55f3fcaf105c78" Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.527533 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blppl"] Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.531621 4695 scope.go:117] "RemoveContainer" containerID="5430c0af09dfe408047e4da7b5fbc3ec07ff57256c1533bc840e59c913fca1a2" Nov 26 13:37:11 crc kubenswrapper[4695]: I1126 13:37:11.537451 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-blppl"] Nov 26 13:37:13 crc kubenswrapper[4695]: I1126 13:37:13.170945 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" path="/var/lib/kubelet/pods/804c7db3-4464-44a3-8fba-55d98f498bc7/volumes" Nov 26 13:37:37 crc kubenswrapper[4695]: I1126 13:37:37.432904 4695 scope.go:117] "RemoveContainer" containerID="9dbcbd0e8241da877762ff6724e326b05c69e34f8bcb920b9504a24fa62e1b31" Nov 26 13:37:37 crc kubenswrapper[4695]: I1126 13:37:37.459974 4695 scope.go:117] "RemoveContainer" containerID="f041d1d23ff5846c8a107a05a352c20410c21fb3d90bcb1378e4f37bcb45fa15" Nov 26 13:37:37 crc kubenswrapper[4695]: I1126 13:37:37.491684 4695 scope.go:117] "RemoveContainer" containerID="904846e5ff99f2d79afb278dbc4b18d5ccdd789c5b46704daac11cc3aa944548" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.440809 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc"] Nov 26 13:37:43 crc kubenswrapper[4695]: E1126 13:37:43.441605 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="extract-content" Nov 26 13:37:43 crc 
kubenswrapper[4695]: I1126 13:37:43.441622 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="extract-content" Nov 26 13:37:43 crc kubenswrapper[4695]: E1126 13:37:43.441636 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="extract-utilities" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.441643 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="extract-utilities" Nov 26 13:37:43 crc kubenswrapper[4695]: E1126 13:37:43.441653 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="extract-utilities" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.441661 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="extract-utilities" Nov 26 13:37:43 crc kubenswrapper[4695]: E1126 13:37:43.441677 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="registry-server" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.441683 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="registry-server" Nov 26 13:37:43 crc kubenswrapper[4695]: E1126 13:37:43.441695 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="extract-content" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.441701 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="extract-content" Nov 26 13:37:43 crc kubenswrapper[4695]: E1126 13:37:43.441710 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="registry-server" Nov 26 13:37:43 crc 
kubenswrapper[4695]: I1126 13:37:43.441715 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="registry-server" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.441811 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="804c7db3-4464-44a3-8fba-55d98f498bc7" containerName="registry-server" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.441826 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af3df73-3849-41b1-b1e6-7410f26dcd17" containerName="registry-server" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.442867 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.445711 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.453460 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc"] Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.459478 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhb2g\" (UniqueName: \"kubernetes.io/projected/5d0013c1-72c9-498b-bd7c-702efbd0ea45-kube-api-access-jhb2g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.459578 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-bundle\") pod 
\"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.459612 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.560209 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.560263 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.560308 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhb2g\" (UniqueName: \"kubernetes.io/projected/5d0013c1-72c9-498b-bd7c-702efbd0ea45-kube-api-access-jhb2g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " 
pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.560782 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.561069 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.588520 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhb2g\" (UniqueName: \"kubernetes.io/projected/5d0013c1-72c9-498b-bd7c-702efbd0ea45-kube-api-access-jhb2g\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:43 crc kubenswrapper[4695]: I1126 13:37:43.771243 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:44 crc kubenswrapper[4695]: I1126 13:37:44.003092 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc"] Nov 26 13:37:44 crc kubenswrapper[4695]: I1126 13:37:44.730161 4695 generic.go:334] "Generic (PLEG): container finished" podID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerID="3ad1fd8ef2ef02e94404da158065c1f7ebe4ea7353f6e27e60ceb284a6f6641b" exitCode=0 Nov 26 13:37:44 crc kubenswrapper[4695]: I1126 13:37:44.730299 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" event={"ID":"5d0013c1-72c9-498b-bd7c-702efbd0ea45","Type":"ContainerDied","Data":"3ad1fd8ef2ef02e94404da158065c1f7ebe4ea7353f6e27e60ceb284a6f6641b"} Nov 26 13:37:44 crc kubenswrapper[4695]: I1126 13:37:44.730850 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" event={"ID":"5d0013c1-72c9-498b-bd7c-702efbd0ea45","Type":"ContainerStarted","Data":"9c6015665ccbc2de43ab289847c259e1ce9d27210cc6c053d5adaf74ad773872"} Nov 26 13:37:46 crc kubenswrapper[4695]: I1126 13:37:46.746155 4695 generic.go:334] "Generic (PLEG): container finished" podID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerID="fe21add459ac3d24a5bc57c79eef725a11005afd28c44b75bd609fa01d17d43a" exitCode=0 Nov 26 13:37:46 crc kubenswrapper[4695]: I1126 13:37:46.746226 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" event={"ID":"5d0013c1-72c9-498b-bd7c-702efbd0ea45","Type":"ContainerDied","Data":"fe21add459ac3d24a5bc57c79eef725a11005afd28c44b75bd609fa01d17d43a"} Nov 26 13:37:47 crc kubenswrapper[4695]: I1126 13:37:47.758029 4695 
generic.go:334] "Generic (PLEG): container finished" podID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerID="43eac609abc000cd7e05637a7c289e98026da43eac3fe75f60aa04b1a44c0aee" exitCode=0 Nov 26 13:37:47 crc kubenswrapper[4695]: I1126 13:37:47.758319 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" event={"ID":"5d0013c1-72c9-498b-bd7c-702efbd0ea45","Type":"ContainerDied","Data":"43eac609abc000cd7e05637a7c289e98026da43eac3fe75f60aa04b1a44c0aee"} Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.023177 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.142289 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhb2g\" (UniqueName: \"kubernetes.io/projected/5d0013c1-72c9-498b-bd7c-702efbd0ea45-kube-api-access-jhb2g\") pod \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.142441 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-bundle\") pod \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.142613 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-util\") pod \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\" (UID: \"5d0013c1-72c9-498b-bd7c-702efbd0ea45\") " Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.143571 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-bundle" (OuterVolumeSpecName: "bundle") pod "5d0013c1-72c9-498b-bd7c-702efbd0ea45" (UID: "5d0013c1-72c9-498b-bd7c-702efbd0ea45"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.151734 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0013c1-72c9-498b-bd7c-702efbd0ea45-kube-api-access-jhb2g" (OuterVolumeSpecName: "kube-api-access-jhb2g") pod "5d0013c1-72c9-498b-bd7c-702efbd0ea45" (UID: "5d0013c1-72c9-498b-bd7c-702efbd0ea45"). InnerVolumeSpecName "kube-api-access-jhb2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.159791 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-util" (OuterVolumeSpecName: "util") pod "5d0013c1-72c9-498b-bd7c-702efbd0ea45" (UID: "5d0013c1-72c9-498b-bd7c-702efbd0ea45"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.244479 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhb2g\" (UniqueName: \"kubernetes.io/projected/5d0013c1-72c9-498b-bd7c-702efbd0ea45-kube-api-access-jhb2g\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.244530 4695 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.244540 4695 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5d0013c1-72c9-498b-bd7c-702efbd0ea45-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.775451 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" event={"ID":"5d0013c1-72c9-498b-bd7c-702efbd0ea45","Type":"ContainerDied","Data":"9c6015665ccbc2de43ab289847c259e1ce9d27210cc6c053d5adaf74ad773872"} Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.775502 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c6015665ccbc2de43ab289847c259e1ce9d27210cc6c053d5adaf74ad773872" Nov 26 13:37:49 crc kubenswrapper[4695]: I1126 13:37:49.775579 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.120865 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ttqk2"] Nov 26 13:37:55 crc kubenswrapper[4695]: E1126 13:37:55.121783 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerName="util" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.121804 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerName="util" Nov 26 13:37:55 crc kubenswrapper[4695]: E1126 13:37:55.121831 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerName="pull" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.121839 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerName="pull" Nov 26 13:37:55 crc kubenswrapper[4695]: E1126 13:37:55.121857 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerName="extract" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.121865 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerName="extract" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.122010 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0013c1-72c9-498b-bd7c-702efbd0ea45" containerName="extract" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.122710 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.125310 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.125361 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lt8h4" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.126875 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.134424 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hcf\" (UniqueName: \"kubernetes.io/projected/326ba3c5-ae42-4131-99a0-2ef80841d58b-kube-api-access-t7hcf\") pod \"nmstate-operator-557fdffb88-ttqk2\" (UID: \"326ba3c5-ae42-4131-99a0-2ef80841d58b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.138184 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ttqk2"] Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.236240 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7hcf\" (UniqueName: \"kubernetes.io/projected/326ba3c5-ae42-4131-99a0-2ef80841d58b-kube-api-access-t7hcf\") pod \"nmstate-operator-557fdffb88-ttqk2\" (UID: \"326ba3c5-ae42-4131-99a0-2ef80841d58b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.263031 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7hcf\" (UniqueName: \"kubernetes.io/projected/326ba3c5-ae42-4131-99a0-2ef80841d58b-kube-api-access-t7hcf\") pod \"nmstate-operator-557fdffb88-ttqk2\" (UID: 
\"326ba3c5-ae42-4131-99a0-2ef80841d58b\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.473734 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.724956 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-ttqk2"] Nov 26 13:37:55 crc kubenswrapper[4695]: I1126 13:37:55.812653 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" event={"ID":"326ba3c5-ae42-4131-99a0-2ef80841d58b","Type":"ContainerStarted","Data":"f66f405f8a7eb703bca543a1a3b0a1a2b312bd9b95e208e60688ffe6b3e83660"} Nov 26 13:37:58 crc kubenswrapper[4695]: I1126 13:37:58.839006 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" event={"ID":"326ba3c5-ae42-4131-99a0-2ef80841d58b","Type":"ContainerStarted","Data":"bae9d57b5a9bfeb89b91391ab2cba47986bde63f9197bf2a9e88253d414a3fe9"} Nov 26 13:37:58 crc kubenswrapper[4695]: I1126 13:37:58.864082 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-ttqk2" podStartSLOduration=1.080769746 podStartE2EDuration="3.864017833s" podCreationTimestamp="2025-11-26 13:37:55 +0000 UTC" firstStartedPulling="2025-11-26 13:37:55.738912348 +0000 UTC m=+859.374737430" lastFinishedPulling="2025-11-26 13:37:58.522160435 +0000 UTC m=+862.157985517" observedRunningTime="2025-11-26 13:37:58.860518384 +0000 UTC m=+862.496343486" watchObservedRunningTime="2025-11-26 13:37:58.864017833 +0000 UTC m=+862.499842915" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.673036 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh"] Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 
13:38:03.675326 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.676548 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs7wk\" (UniqueName: \"kubernetes.io/projected/e52265d8-4340-457f-824f-c593dc560e5b-kube-api-access-bs7wk\") pod \"nmstate-metrics-5dcf9c57c5-9trxh\" (UID: \"e52265d8-4340-457f-824f-c593dc560e5b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.677800 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-5fbr5" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.698739 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t"] Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.699984 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.702335 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.708308 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh"] Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.725678 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t"] Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.738330 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-72fwr"] Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.739676 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.778918 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/61ef0d85-8eb9-4241-958b-12c3a4b4a064-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-b7f9t\" (UID: \"61ef0d85-8eb9-4241-958b-12c3a4b4a064\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.779078 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-nmstate-lock\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.779205 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-ovs-socket\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.779401 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-dbus-socket\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.779471 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcx4l\" (UniqueName: \"kubernetes.io/projected/61ef0d85-8eb9-4241-958b-12c3a4b4a064-kube-api-access-hcx4l\") pod 
\"nmstate-webhook-6b89b748d8-b7f9t\" (UID: \"61ef0d85-8eb9-4241-958b-12c3a4b4a064\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.779504 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs7wk\" (UniqueName: \"kubernetes.io/projected/e52265d8-4340-457f-824f-c593dc560e5b-kube-api-access-bs7wk\") pod \"nmstate-metrics-5dcf9c57c5-9trxh\" (UID: \"e52265d8-4340-457f-824f-c593dc560e5b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.779558 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hrm\" (UniqueName: \"kubernetes.io/projected/11fb4b6b-098b-49d0-884e-460720ddfcd5-kube-api-access-w4hrm\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.808527 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs7wk\" (UniqueName: \"kubernetes.io/projected/e52265d8-4340-457f-824f-c593dc560e5b-kube-api-access-bs7wk\") pod \"nmstate-metrics-5dcf9c57c5-9trxh\" (UID: \"e52265d8-4340-457f-824f-c593dc560e5b\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.867255 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz"] Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.868326 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.873131 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rh6jd" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.873462 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.873455 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.880968 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcx4l\" (UniqueName: \"kubernetes.io/projected/61ef0d85-8eb9-4241-958b-12c3a4b4a064-kube-api-access-hcx4l\") pod \"nmstate-webhook-6b89b748d8-b7f9t\" (UID: \"61ef0d85-8eb9-4241-958b-12c3a4b4a064\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881032 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hrm\" (UniqueName: \"kubernetes.io/projected/11fb4b6b-098b-49d0-884e-460720ddfcd5-kube-api-access-w4hrm\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881064 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/61ef0d85-8eb9-4241-958b-12c3a4b4a064-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-b7f9t\" (UID: \"61ef0d85-8eb9-4241-958b-12c3a4b4a064\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881092 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/2c957cde-292c-4ede-a2c3-dd684372157e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881128 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-nmstate-lock\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881156 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-ovs-socket\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881180 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c957cde-292c-4ede-a2c3-dd684372157e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881217 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkzbq\" (UniqueName: \"kubernetes.io/projected/2c957cde-292c-4ede-a2c3-dd684372157e-kube-api-access-gkzbq\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881241 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-dbus-socket\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.881657 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-ovs-socket\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: E1126 13:38:03.881838 4695 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Nov 26 13:38:03 crc kubenswrapper[4695]: E1126 13:38:03.881992 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/61ef0d85-8eb9-4241-958b-12c3a4b4a064-tls-key-pair podName:61ef0d85-8eb9-4241-958b-12c3a4b4a064 nodeName:}" failed. No retries permitted until 2025-11-26 13:38:04.381964298 +0000 UTC m=+868.017789380 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/61ef0d85-8eb9-4241-958b-12c3a4b4a064-tls-key-pair") pod "nmstate-webhook-6b89b748d8-b7f9t" (UID: "61ef0d85-8eb9-4241-958b-12c3a4b4a064") : secret "openshift-nmstate-webhook" not found Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.882166 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-dbus-socket\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.882889 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/11fb4b6b-098b-49d0-884e-460720ddfcd5-nmstate-lock\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.914118 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcx4l\" (UniqueName: \"kubernetes.io/projected/61ef0d85-8eb9-4241-958b-12c3a4b4a064-kube-api-access-hcx4l\") pod \"nmstate-webhook-6b89b748d8-b7f9t\" (UID: \"61ef0d85-8eb9-4241-958b-12c3a4b4a064\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.914603 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hrm\" (UniqueName: \"kubernetes.io/projected/11fb4b6b-098b-49d0-884e-460720ddfcd5-kube-api-access-w4hrm\") pod \"nmstate-handler-72fwr\" (UID: \"11fb4b6b-098b-49d0-884e-460720ddfcd5\") " pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.928979 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz"] 
Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.982575 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c957cde-292c-4ede-a2c3-dd684372157e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.983239 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkzbq\" (UniqueName: \"kubernetes.io/projected/2c957cde-292c-4ede-a2c3-dd684372157e-kube-api-access-gkzbq\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.983744 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c957cde-292c-4ede-a2c3-dd684372157e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.984770 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/2c957cde-292c-4ede-a2c3-dd684372157e-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.992900 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c957cde-292c-4ede-a2c3-dd684372157e-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: 
\"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:03 crc kubenswrapper[4695]: I1126 13:38:03.998009 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.021828 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkzbq\" (UniqueName: \"kubernetes.io/projected/2c957cde-292c-4ede-a2c3-dd684372157e-kube-api-access-gkzbq\") pod \"nmstate-console-plugin-5874bd7bc5-4jrpz\" (UID: \"2c957cde-292c-4ede-a2c3-dd684372157e\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.067574 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.081041 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-557fb798c7-8sr8k"] Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.083120 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.084313 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kr2\" (UniqueName: \"kubernetes.io/projected/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-kube-api-access-z2kr2\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.084378 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-trusted-ca-bundle\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.084427 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-oauth-config\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.084457 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-service-ca\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.084483 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-config\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.084513 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-oauth-serving-cert\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.084568 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-serving-cert\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.124210 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-557fb798c7-8sr8k"] Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.186448 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-serving-cert\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.186541 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2kr2\" (UniqueName: \"kubernetes.io/projected/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-kube-api-access-z2kr2\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " 
pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.186566 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-trusted-ca-bundle\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.186603 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-oauth-config\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.186628 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-service-ca\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.186691 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-config\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.186711 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-oauth-serving-cert\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " 
pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.188625 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-oauth-serving-cert\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.189213 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-trusted-ca-bundle\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.191804 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-service-ca\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.192932 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-config\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.195502 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.196487 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-oauth-config\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.198105 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-console-serving-cert\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.215893 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2kr2\" (UniqueName: \"kubernetes.io/projected/c0fabbca-f43b-4d24-aaa4-24875c97cf3f-kube-api-access-z2kr2\") pod \"console-557fb798c7-8sr8k\" (UID: \"c0fabbca-f43b-4d24-aaa4-24875c97cf3f\") " pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.301193 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh"] Nov 26 13:38:04 crc kubenswrapper[4695]: W1126 13:38:04.314478 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode52265d8_4340_457f_824f_c593dc560e5b.slice/crio-1469be0df5aab5967fe3ed9098e9299c11c3da67f7f4a61ae43aa72e2e228e69 WatchSource:0}: Error finding container 1469be0df5aab5967fe3ed9098e9299c11c3da67f7f4a61ae43aa72e2e228e69: Status 404 returned error can't find the container with id 1469be0df5aab5967fe3ed9098e9299c11c3da67f7f4a61ae43aa72e2e228e69 Nov 26 13:38:04 crc 
kubenswrapper[4695]: I1126 13:38:04.390558 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/61ef0d85-8eb9-4241-958b-12c3a4b4a064-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-b7f9t\" (UID: \"61ef0d85-8eb9-4241-958b-12c3a4b4a064\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.394360 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/61ef0d85-8eb9-4241-958b-12c3a4b4a064-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-b7f9t\" (UID: \"61ef0d85-8eb9-4241-958b-12c3a4b4a064\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.445637 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.450698 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz"] Nov 26 13:38:04 crc kubenswrapper[4695]: W1126 13:38:04.467018 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c957cde_292c_4ede_a2c3_dd684372157e.slice/crio-3dd56ebcb0e57af55e6c4525d856d6e8000488b979ebfef21aa1b8a425b8e22c WatchSource:0}: Error finding container 3dd56ebcb0e57af55e6c4525d856d6e8000488b979ebfef21aa1b8a425b8e22c: Status 404 returned error can't find the container with id 3dd56ebcb0e57af55e6c4525d856d6e8000488b979ebfef21aa1b8a425b8e22c Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.624169 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.680252 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-557fb798c7-8sr8k"] Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.860668 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t"] Nov 26 13:38:04 crc kubenswrapper[4695]: W1126 13:38:04.882850 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61ef0d85_8eb9_4241_958b_12c3a4b4a064.slice/crio-c11aa6545eb5522b4d7e8a82e3505d1daef0f18d92fc1e7fb12decddfa474e79 WatchSource:0}: Error finding container c11aa6545eb5522b4d7e8a82e3505d1daef0f18d92fc1e7fb12decddfa474e79: Status 404 returned error can't find the container with id c11aa6545eb5522b4d7e8a82e3505d1daef0f18d92fc1e7fb12decddfa474e79 Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.897678 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-72fwr" event={"ID":"11fb4b6b-098b-49d0-884e-460720ddfcd5","Type":"ContainerStarted","Data":"b47fec9cb896350a4d3f1bd7d15724227b4f251640c5221c9c1b294f4a1c66cb"} Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.900695 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" event={"ID":"2c957cde-292c-4ede-a2c3-dd684372157e","Type":"ContainerStarted","Data":"3dd56ebcb0e57af55e6c4525d856d6e8000488b979ebfef21aa1b8a425b8e22c"} Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.905770 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-557fb798c7-8sr8k" event={"ID":"c0fabbca-f43b-4d24-aaa4-24875c97cf3f","Type":"ContainerStarted","Data":"a9fa85b23a660e50c7f44d7d061cae451de62948bb50cf64af6f24af6eb68838"} Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.905837 
4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-557fb798c7-8sr8k" event={"ID":"c0fabbca-f43b-4d24-aaa4-24875c97cf3f","Type":"ContainerStarted","Data":"19f53b85d53ef65edb8abe7d6fea1f266df079ac38e7dd5d4d437e4879b3b42a"} Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.908231 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" event={"ID":"e52265d8-4340-457f-824f-c593dc560e5b","Type":"ContainerStarted","Data":"1469be0df5aab5967fe3ed9098e9299c11c3da67f7f4a61ae43aa72e2e228e69"} Nov 26 13:38:04 crc kubenswrapper[4695]: I1126 13:38:04.928813 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-557fb798c7-8sr8k" podStartSLOduration=0.928785971 podStartE2EDuration="928.785971ms" podCreationTimestamp="2025-11-26 13:38:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:38:04.927853251 +0000 UTC m=+868.563678343" watchObservedRunningTime="2025-11-26 13:38:04.928785971 +0000 UTC m=+868.564611053" Nov 26 13:38:05 crc kubenswrapper[4695]: I1126 13:38:05.916260 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" event={"ID":"61ef0d85-8eb9-4241-958b-12c3a4b4a064","Type":"ContainerStarted","Data":"c11aa6545eb5522b4d7e8a82e3505d1daef0f18d92fc1e7fb12decddfa474e79"} Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.652440 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" event={"ID":"2c957cde-292c-4ede-a2c3-dd684372157e","Type":"ContainerStarted","Data":"bb73bf80a140662f082d34e489d02aee7c4fe0a1a1280704f0a7e1e6b6b8b093"} Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.655445 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" 
event={"ID":"e52265d8-4340-457f-824f-c593dc560e5b","Type":"ContainerStarted","Data":"c42668eb5d545d9b1016639639f20cd8ee2613ace64c98eb1e8774f913421835"} Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.657210 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" event={"ID":"61ef0d85-8eb9-4241-958b-12c3a4b4a064","Type":"ContainerStarted","Data":"6a75a0562025b4d8d04dd905ba6c0208770b638ae300409ce88318c633bd3c74"} Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.658617 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.661474 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-72fwr" event={"ID":"11fb4b6b-098b-49d0-884e-460720ddfcd5","Type":"ContainerStarted","Data":"3ea2ac348e6bfab888df29614d642bffd3f3e6f3f41683dc563186c0f49d6d68"} Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.661957 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.680618 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-4jrpz" podStartSLOduration=2.070491031 podStartE2EDuration="5.680579229s" podCreationTimestamp="2025-11-26 13:38:03 +0000 UTC" firstStartedPulling="2025-11-26 13:38:04.470586822 +0000 UTC m=+868.106411904" lastFinishedPulling="2025-11-26 13:38:08.08067502 +0000 UTC m=+871.716500102" observedRunningTime="2025-11-26 13:38:08.673670391 +0000 UTC m=+872.309495473" watchObservedRunningTime="2025-11-26 13:38:08.680579229 +0000 UTC m=+872.316404311" Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.746371 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-72fwr" 
podStartSLOduration=1.786358127 podStartE2EDuration="5.746319117s" podCreationTimestamp="2025-11-26 13:38:03 +0000 UTC" firstStartedPulling="2025-11-26 13:38:04.13594172 +0000 UTC m=+867.771766802" lastFinishedPulling="2025-11-26 13:38:08.09590271 +0000 UTC m=+871.731727792" observedRunningTime="2025-11-26 13:38:08.743799788 +0000 UTC m=+872.379624880" watchObservedRunningTime="2025-11-26 13:38:08.746319117 +0000 UTC m=+872.382144199" Nov 26 13:38:08 crc kubenswrapper[4695]: I1126 13:38:08.749835 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" podStartSLOduration=2.562039648 podStartE2EDuration="5.749819658s" podCreationTimestamp="2025-11-26 13:38:03 +0000 UTC" firstStartedPulling="2025-11-26 13:38:04.891941641 +0000 UTC m=+868.527766733" lastFinishedPulling="2025-11-26 13:38:08.079721661 +0000 UTC m=+871.715546743" observedRunningTime="2025-11-26 13:38:08.722199729 +0000 UTC m=+872.358024821" watchObservedRunningTime="2025-11-26 13:38:08.749819658 +0000 UTC m=+872.385644740" Nov 26 13:38:10 crc kubenswrapper[4695]: I1126 13:38:10.680762 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" event={"ID":"e52265d8-4340-457f-824f-c593dc560e5b","Type":"ContainerStarted","Data":"9fc3efa7f4cb7a9661b11cc4a3fdb495d44fc7251afe9784013b6d80fd79ff54"} Nov 26 13:38:14 crc kubenswrapper[4695]: I1126 13:38:14.097531 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-72fwr" Nov 26 13:38:14 crc kubenswrapper[4695]: I1126 13:38:14.126568 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-9trxh" podStartSLOduration=4.957795373 podStartE2EDuration="11.126546983s" podCreationTimestamp="2025-11-26 13:38:03 +0000 UTC" firstStartedPulling="2025-11-26 13:38:04.317922837 +0000 UTC m=+867.953747919" 
lastFinishedPulling="2025-11-26 13:38:10.486674437 +0000 UTC m=+874.122499529" observedRunningTime="2025-11-26 13:38:10.700392642 +0000 UTC m=+874.336217724" watchObservedRunningTime="2025-11-26 13:38:14.126546983 +0000 UTC m=+877.762372065" Nov 26 13:38:14 crc kubenswrapper[4695]: I1126 13:38:14.446033 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:14 crc kubenswrapper[4695]: I1126 13:38:14.446149 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:14 crc kubenswrapper[4695]: I1126 13:38:14.452233 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:14 crc kubenswrapper[4695]: I1126 13:38:14.722641 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-557fb798c7-8sr8k" Nov 26 13:38:14 crc kubenswrapper[4695]: I1126 13:38:14.782283 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4fqcc"] Nov 26 13:38:24 crc kubenswrapper[4695]: I1126 13:38:24.632317 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-b7f9t" Nov 26 13:38:36 crc kubenswrapper[4695]: I1126 13:38:36.397191 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:38:36 crc kubenswrapper[4695]: I1126 13:38:36.398495 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:38:38 crc kubenswrapper[4695]: I1126 13:38:38.834728 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd"] Nov 26 13:38:38 crc kubenswrapper[4695]: I1126 13:38:38.837001 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:38 crc kubenswrapper[4695]: I1126 13:38:38.839688 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 26 13:38:38 crc kubenswrapper[4695]: I1126 13:38:38.851032 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd"] Nov 26 13:38:38 crc kubenswrapper[4695]: I1126 13:38:38.918899 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:38 crc kubenswrapper[4695]: I1126 13:38:38.919055 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:38 crc kubenswrapper[4695]: I1126 13:38:38.919108 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96p6f\" (UniqueName: \"kubernetes.io/projected/98eff931-5636-4fab-b319-883648640d79-kube-api-access-96p6f\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.020382 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.020777 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.020911 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96p6f\" (UniqueName: \"kubernetes.io/projected/98eff931-5636-4fab-b319-883648640d79-kube-api-access-96p6f\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.021253 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.021254 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.042699 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96p6f\" (UniqueName: \"kubernetes.io/projected/98eff931-5636-4fab-b319-883648640d79-kube-api-access-96p6f\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.159236 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.603958 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd"] Nov 26 13:38:39 crc kubenswrapper[4695]: W1126 13:38:39.613934 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98eff931_5636_4fab_b319_883648640d79.slice/crio-783df18e25590144e8edadf65af6e2e39653860a7ae5cac8e8882cfd92759dd9 WatchSource:0}: Error finding container 783df18e25590144e8edadf65af6e2e39653860a7ae5cac8e8882cfd92759dd9: Status 404 returned error can't find the container with id 783df18e25590144e8edadf65af6e2e39653860a7ae5cac8e8882cfd92759dd9 Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.826401 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4fqcc" podUID="d84d0827-d7fe-42eb-adbe-eda35247c26c" containerName="console" containerID="cri-o://472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029" gracePeriod=15 Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.881656 4695 generic.go:334] "Generic (PLEG): container finished" podID="98eff931-5636-4fab-b319-883648640d79" containerID="77aada451fddd4188e6593348b95ca8abb5a474de0318714ffd3ca9feb63ebd7" exitCode=0 Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.881720 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" event={"ID":"98eff931-5636-4fab-b319-883648640d79","Type":"ContainerDied","Data":"77aada451fddd4188e6593348b95ca8abb5a474de0318714ffd3ca9feb63ebd7"} Nov 26 13:38:39 crc kubenswrapper[4695]: I1126 13:38:39.882518 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" event={"ID":"98eff931-5636-4fab-b319-883648640d79","Type":"ContainerStarted","Data":"783df18e25590144e8edadf65af6e2e39653860a7ae5cac8e8882cfd92759dd9"} Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.192685 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4fqcc_d84d0827-d7fe-42eb-adbe-eda35247c26c/console/0.log" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.192749 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.342312 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-service-ca\") pod \"d84d0827-d7fe-42eb-adbe-eda35247c26c\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.342419 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-trusted-ca-bundle\") pod \"d84d0827-d7fe-42eb-adbe-eda35247c26c\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.342477 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-oauth-serving-cert\") pod \"d84d0827-d7fe-42eb-adbe-eda35247c26c\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.342507 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-oauth-config\") pod 
\"d84d0827-d7fe-42eb-adbe-eda35247c26c\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.342527 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-serving-cert\") pod \"d84d0827-d7fe-42eb-adbe-eda35247c26c\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.342573 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mxhn\" (UniqueName: \"kubernetes.io/projected/d84d0827-d7fe-42eb-adbe-eda35247c26c-kube-api-access-9mxhn\") pod \"d84d0827-d7fe-42eb-adbe-eda35247c26c\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.342621 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-config\") pod \"d84d0827-d7fe-42eb-adbe-eda35247c26c\" (UID: \"d84d0827-d7fe-42eb-adbe-eda35247c26c\") " Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.343383 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-service-ca" (OuterVolumeSpecName: "service-ca") pod "d84d0827-d7fe-42eb-adbe-eda35247c26c" (UID: "d84d0827-d7fe-42eb-adbe-eda35247c26c"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.343418 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d84d0827-d7fe-42eb-adbe-eda35247c26c" (UID: "d84d0827-d7fe-42eb-adbe-eda35247c26c"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.344972 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-config" (OuterVolumeSpecName: "console-config") pod "d84d0827-d7fe-42eb-adbe-eda35247c26c" (UID: "d84d0827-d7fe-42eb-adbe-eda35247c26c"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.345008 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "d84d0827-d7fe-42eb-adbe-eda35247c26c" (UID: "d84d0827-d7fe-42eb-adbe-eda35247c26c"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.353454 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84d0827-d7fe-42eb-adbe-eda35247c26c-kube-api-access-9mxhn" (OuterVolumeSpecName: "kube-api-access-9mxhn") pod "d84d0827-d7fe-42eb-adbe-eda35247c26c" (UID: "d84d0827-d7fe-42eb-adbe-eda35247c26c"). InnerVolumeSpecName "kube-api-access-9mxhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.353794 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "d84d0827-d7fe-42eb-adbe-eda35247c26c" (UID: "d84d0827-d7fe-42eb-adbe-eda35247c26c"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.353980 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "d84d0827-d7fe-42eb-adbe-eda35247c26c" (UID: "d84d0827-d7fe-42eb-adbe-eda35247c26c"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.444314 4695 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-service-ca\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.444388 4695 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.444410 4695 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.444427 4695 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.444446 4695 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.444463 4695 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-9mxhn\" (UniqueName: \"kubernetes.io/projected/d84d0827-d7fe-42eb-adbe-eda35247c26c-kube-api-access-9mxhn\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.444482 4695 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d84d0827-d7fe-42eb-adbe-eda35247c26c-console-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.890238 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4fqcc_d84d0827-d7fe-42eb-adbe-eda35247c26c/console/0.log" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.890623 4695 generic.go:334] "Generic (PLEG): container finished" podID="d84d0827-d7fe-42eb-adbe-eda35247c26c" containerID="472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029" exitCode=2 Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.890658 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4fqcc" event={"ID":"d84d0827-d7fe-42eb-adbe-eda35247c26c","Type":"ContainerDied","Data":"472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029"} Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.890694 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4fqcc" event={"ID":"d84d0827-d7fe-42eb-adbe-eda35247c26c","Type":"ContainerDied","Data":"cc898b040f3ee7eac3ea023613ca8cff77844e181b83e735728ea39810c34273"} Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.890722 4695 scope.go:117] "RemoveContainer" containerID="472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.890767 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4fqcc" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.922682 4695 scope.go:117] "RemoveContainer" containerID="472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.923371 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4fqcc"] Nov 26 13:38:40 crc kubenswrapper[4695]: E1126 13:38:40.923395 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029\": container with ID starting with 472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029 not found: ID does not exist" containerID="472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.923461 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029"} err="failed to get container status \"472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029\": rpc error: code = NotFound desc = could not find container \"472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029\": container with ID starting with 472fa4961abe76a96cefa5daa64a951c13eef87291d815c6fe1b502227c5a029 not found: ID does not exist" Nov 26 13:38:40 crc kubenswrapper[4695]: I1126 13:38:40.927080 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4fqcc"] Nov 26 13:38:41 crc kubenswrapper[4695]: I1126 13:38:41.173132 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84d0827-d7fe-42eb-adbe-eda35247c26c" path="/var/lib/kubelet/pods/d84d0827-d7fe-42eb-adbe-eda35247c26c/volumes" Nov 26 13:38:41 crc kubenswrapper[4695]: I1126 13:38:41.902561 4695 generic.go:334] "Generic (PLEG): 
container finished" podID="98eff931-5636-4fab-b319-883648640d79" containerID="8b7dcfb024476377175db2d99bf432541a62b5d4ccea1dccd9d105b54dae401c" exitCode=0 Nov 26 13:38:41 crc kubenswrapper[4695]: I1126 13:38:41.902626 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" event={"ID":"98eff931-5636-4fab-b319-883648640d79","Type":"ContainerDied","Data":"8b7dcfb024476377175db2d99bf432541a62b5d4ccea1dccd9d105b54dae401c"} Nov 26 13:38:42 crc kubenswrapper[4695]: I1126 13:38:42.919977 4695 generic.go:334] "Generic (PLEG): container finished" podID="98eff931-5636-4fab-b319-883648640d79" containerID="184fff7c7324e3a610997ad724c391423f88841f8f431e5fec786d101329292c" exitCode=0 Nov 26 13:38:42 crc kubenswrapper[4695]: I1126 13:38:42.920081 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" event={"ID":"98eff931-5636-4fab-b319-883648640d79","Type":"ContainerDied","Data":"184fff7c7324e3a610997ad724c391423f88841f8f431e5fec786d101329292c"} Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.228908 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.308214 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96p6f\" (UniqueName: \"kubernetes.io/projected/98eff931-5636-4fab-b319-883648640d79-kube-api-access-96p6f\") pod \"98eff931-5636-4fab-b319-883648640d79\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.308396 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-util\") pod \"98eff931-5636-4fab-b319-883648640d79\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.308447 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-bundle\") pod \"98eff931-5636-4fab-b319-883648640d79\" (UID: \"98eff931-5636-4fab-b319-883648640d79\") " Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.310123 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-bundle" (OuterVolumeSpecName: "bundle") pod "98eff931-5636-4fab-b319-883648640d79" (UID: "98eff931-5636-4fab-b319-883648640d79"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.315386 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98eff931-5636-4fab-b319-883648640d79-kube-api-access-96p6f" (OuterVolumeSpecName: "kube-api-access-96p6f") pod "98eff931-5636-4fab-b319-883648640d79" (UID: "98eff931-5636-4fab-b319-883648640d79"). InnerVolumeSpecName "kube-api-access-96p6f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.333257 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-util" (OuterVolumeSpecName: "util") pod "98eff931-5636-4fab-b319-883648640d79" (UID: "98eff931-5636-4fab-b319-883648640d79"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.410819 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96p6f\" (UniqueName: \"kubernetes.io/projected/98eff931-5636-4fab-b319-883648640d79-kube-api-access-96p6f\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.410866 4695 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.410883 4695 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/98eff931-5636-4fab-b319-883648640d79-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.937222 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" event={"ID":"98eff931-5636-4fab-b319-883648640d79","Type":"ContainerDied","Data":"783df18e25590144e8edadf65af6e2e39653860a7ae5cac8e8882cfd92759dd9"} Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.937266 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="783df18e25590144e8edadf65af6e2e39653860a7ae5cac8e8882cfd92759dd9" Nov 26 13:38:44 crc kubenswrapper[4695]: I1126 13:38:44.937270 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.223734 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk"] Nov 26 13:38:56 crc kubenswrapper[4695]: E1126 13:38:56.224921 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84d0827-d7fe-42eb-adbe-eda35247c26c" containerName="console" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.224939 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84d0827-d7fe-42eb-adbe-eda35247c26c" containerName="console" Nov 26 13:38:56 crc kubenswrapper[4695]: E1126 13:38:56.224953 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eff931-5636-4fab-b319-883648640d79" containerName="pull" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.224959 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eff931-5636-4fab-b319-883648640d79" containerName="pull" Nov 26 13:38:56 crc kubenswrapper[4695]: E1126 13:38:56.224973 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eff931-5636-4fab-b319-883648640d79" containerName="extract" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.224979 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eff931-5636-4fab-b319-883648640d79" containerName="extract" Nov 26 13:38:56 crc kubenswrapper[4695]: E1126 13:38:56.224989 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98eff931-5636-4fab-b319-883648640d79" containerName="util" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.224994 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="98eff931-5636-4fab-b319-883648640d79" containerName="util" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.225103 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84d0827-d7fe-42eb-adbe-eda35247c26c" 
containerName="console" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.225121 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="98eff931-5636-4fab-b319-883648640d79" containerName="extract" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.225719 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.228617 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-nqlkx" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.228785 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.228858 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.228921 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.230890 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.247088 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk"] Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.397295 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhks\" (UniqueName: \"kubernetes.io/projected/ae146f87-799d-4013-954b-7b3df8521851-kube-api-access-9lhks\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " 
pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.397426 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae146f87-799d-4013-954b-7b3df8521851-apiservice-cert\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.397517 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae146f87-799d-4013-954b-7b3df8521851-webhook-cert\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.463616 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-868bccc468-mss5n"] Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.464558 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.466530 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.467266 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-6dzrr" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.469856 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.478719 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-868bccc468-mss5n"] Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.498898 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhks\" (UniqueName: \"kubernetes.io/projected/ae146f87-799d-4013-954b-7b3df8521851-kube-api-access-9lhks\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.498961 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae146f87-799d-4013-954b-7b3df8521851-apiservice-cert\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.499028 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae146f87-799d-4013-954b-7b3df8521851-webhook-cert\") pod 
\"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.509129 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae146f87-799d-4013-954b-7b3df8521851-apiservice-cert\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.521864 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae146f87-799d-4013-954b-7b3df8521851-webhook-cert\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.528716 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lhks\" (UniqueName: \"kubernetes.io/projected/ae146f87-799d-4013-954b-7b3df8521851-kube-api-access-9lhks\") pod \"metallb-operator-controller-manager-7d6f5fdd48-hdrhk\" (UID: \"ae146f87-799d-4013-954b-7b3df8521851\") " pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.547399 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.600534 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e92050c8-b486-4429-ad39-f39f154ff06f-apiservice-cert\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.600602 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf7tf\" (UniqueName: \"kubernetes.io/projected/e92050c8-b486-4429-ad39-f39f154ff06f-kube-api-access-qf7tf\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.600623 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e92050c8-b486-4429-ad39-f39f154ff06f-webhook-cert\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.703582 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qf7tf\" (UniqueName: \"kubernetes.io/projected/e92050c8-b486-4429-ad39-f39f154ff06f-kube-api-access-qf7tf\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.709525 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e92050c8-b486-4429-ad39-f39f154ff06f-webhook-cert\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.709863 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e92050c8-b486-4429-ad39-f39f154ff06f-apiservice-cert\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.715571 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e92050c8-b486-4429-ad39-f39f154ff06f-apiservice-cert\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.716037 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e92050c8-b486-4429-ad39-f39f154ff06f-webhook-cert\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: \"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.726013 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qf7tf\" (UniqueName: \"kubernetes.io/projected/e92050c8-b486-4429-ad39-f39f154ff06f-kube-api-access-qf7tf\") pod \"metallb-operator-webhook-server-868bccc468-mss5n\" (UID: 
\"e92050c8-b486-4429-ad39-f39f154ff06f\") " pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.778808 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:38:56 crc kubenswrapper[4695]: I1126 13:38:56.845251 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk"] Nov 26 13:38:56 crc kubenswrapper[4695]: W1126 13:38:56.856086 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae146f87_799d_4013_954b_7b3df8521851.slice/crio-55481a4afa42c329f27e1f5111079da59a169b64edc75e54d3e6863aa1ea92bb WatchSource:0}: Error finding container 55481a4afa42c329f27e1f5111079da59a169b64edc75e54d3e6863aa1ea92bb: Status 404 returned error can't find the container with id 55481a4afa42c329f27e1f5111079da59a169b64edc75e54d3e6863aa1ea92bb Nov 26 13:38:57 crc kubenswrapper[4695]: I1126 13:38:57.030697 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-868bccc468-mss5n"] Nov 26 13:38:57 crc kubenswrapper[4695]: I1126 13:38:57.031314 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" event={"ID":"ae146f87-799d-4013-954b-7b3df8521851","Type":"ContainerStarted","Data":"55481a4afa42c329f27e1f5111079da59a169b64edc75e54d3e6863aa1ea92bb"} Nov 26 13:38:57 crc kubenswrapper[4695]: W1126 13:38:57.041607 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode92050c8_b486_4429_ad39_f39f154ff06f.slice/crio-01e5b09dbd3385de6c9df025fe06515f9179e2e28d4d475f8aa1dcf559c11de9 WatchSource:0}: Error finding container 
01e5b09dbd3385de6c9df025fe06515f9179e2e28d4d475f8aa1dcf559c11de9: Status 404 returned error can't find the container with id 01e5b09dbd3385de6c9df025fe06515f9179e2e28d4d475f8aa1dcf559c11de9 Nov 26 13:38:58 crc kubenswrapper[4695]: I1126 13:38:58.040529 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" event={"ID":"e92050c8-b486-4429-ad39-f39f154ff06f","Type":"ContainerStarted","Data":"01e5b09dbd3385de6c9df025fe06515f9179e2e28d4d475f8aa1dcf559c11de9"} Nov 26 13:39:03 crc kubenswrapper[4695]: I1126 13:39:03.082652 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" event={"ID":"e92050c8-b486-4429-ad39-f39f154ff06f","Type":"ContainerStarted","Data":"a280c238ba9da33e9398a04d22b40d56e7e9a6240e1a8825514e56a2bcde47e0"} Nov 26 13:39:03 crc kubenswrapper[4695]: I1126 13:39:03.083430 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:39:03 crc kubenswrapper[4695]: I1126 13:39:03.086934 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" event={"ID":"ae146f87-799d-4013-954b-7b3df8521851","Type":"ContainerStarted","Data":"1c5891c050c1e377397193c30f5da0e450bdfb4ad8147110495b12682a52f247"} Nov 26 13:39:03 crc kubenswrapper[4695]: I1126 13:39:03.087097 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:39:03 crc kubenswrapper[4695]: I1126 13:39:03.101227 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" podStartSLOduration=1.971245833 podStartE2EDuration="7.101200076s" podCreationTimestamp="2025-11-26 13:38:56 +0000 UTC" firstStartedPulling="2025-11-26 13:38:57.046147584 
+0000 UTC m=+920.681972666" lastFinishedPulling="2025-11-26 13:39:02.176101827 +0000 UTC m=+925.811926909" observedRunningTime="2025-11-26 13:39:03.101084833 +0000 UTC m=+926.736909915" watchObservedRunningTime="2025-11-26 13:39:03.101200076 +0000 UTC m=+926.737025168" Nov 26 13:39:03 crc kubenswrapper[4695]: I1126 13:39:03.129499 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" podStartSLOduration=1.83156026 podStartE2EDuration="7.129476332s" podCreationTimestamp="2025-11-26 13:38:56 +0000 UTC" firstStartedPulling="2025-11-26 13:38:56.859370029 +0000 UTC m=+920.495195111" lastFinishedPulling="2025-11-26 13:39:02.157286091 +0000 UTC m=+925.793111183" observedRunningTime="2025-11-26 13:39:03.128866432 +0000 UTC m=+926.764691534" watchObservedRunningTime="2025-11-26 13:39:03.129476332 +0000 UTC m=+926.765301424" Nov 26 13:39:06 crc kubenswrapper[4695]: I1126 13:39:06.397292 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:39:06 crc kubenswrapper[4695]: I1126 13:39:06.397903 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:39:16 crc kubenswrapper[4695]: I1126 13:39:16.788322 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-868bccc468-mss5n" Nov 26 13:39:36 crc kubenswrapper[4695]: I1126 13:39:36.396597 4695 patch_prober.go:28] interesting 
pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:39:36 crc kubenswrapper[4695]: I1126 13:39:36.399586 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:39:36 crc kubenswrapper[4695]: I1126 13:39:36.399823 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:39:36 crc kubenswrapper[4695]: I1126 13:39:36.400896 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5db1765d388a4f8bc2fce4e005f968080dbee8df86cf9973d4a9582128ebd4df"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:39:36 crc kubenswrapper[4695]: I1126 13:39:36.401110 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://5db1765d388a4f8bc2fce4e005f968080dbee8df86cf9973d4a9582128ebd4df" gracePeriod=600 Nov 26 13:39:36 crc kubenswrapper[4695]: I1126 13:39:36.551014 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d6f5fdd48-hdrhk" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.296466 4695 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["metallb-system/frr-k8s-tdrxb"] Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.299547 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.301646 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.302478 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.302572 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hdj6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.312903 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p"] Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.314143 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.318048 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.324914 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="5db1765d388a4f8bc2fce4e005f968080dbee8df86cf9973d4a9582128ebd4df" exitCode=0 Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.324969 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"5db1765d388a4f8bc2fce4e005f968080dbee8df86cf9973d4a9582128ebd4df"} Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.325329 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"d5ada8ee218c5b0e5eb69bc5a99a08479c0f37839560e96a3518e7444c465fbd"} Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.325410 4695 scope.go:117] "RemoveContainer" containerID="84b7005908bcbacbd029ad98335f5091a83ffcb04f71e35adcdf1c55fac6fce2" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340137 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-conf\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340216 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340240 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-startup\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340268 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6whd\" (UniqueName: \"kubernetes.io/projected/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-kube-api-access-n6whd\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340308 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-reloader\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340431 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics-certs\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340515 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b894f31-fadd-4034-a93a-d7767eb59691-cert\") pod 
\"frr-k8s-webhook-server-6998585d5-5sn6p\" (UID: \"9b894f31-fadd-4034-a93a-d7767eb59691\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340553 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blkbk\" (UniqueName: \"kubernetes.io/projected/9b894f31-fadd-4034-a93a-d7767eb59691-kube-api-access-blkbk\") pod \"frr-k8s-webhook-server-6998585d5-5sn6p\" (UID: \"9b894f31-fadd-4034-a93a-d7767eb59691\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.340623 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-sockets\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.344034 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p"] Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.416799 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4b6f5"] Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.417666 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.420228 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.420319 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-lvlhp" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.422839 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.432546 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-7h5k9"] Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.433998 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.434309 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.436219 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.443779 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gdb5\" (UniqueName: \"kubernetes.io/projected/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-kube-api-access-7gdb5\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.443837 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-reloader\") pod \"frr-k8s-tdrxb\" (UID: 
\"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.443866 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-metallb-excludel2\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.443906 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics-certs\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.443937 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b894f31-fadd-4034-a93a-d7767eb59691-cert\") pod \"frr-k8s-webhook-server-6998585d5-5sn6p\" (UID: \"9b894f31-fadd-4034-a93a-d7767eb59691\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.443970 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blkbk\" (UniqueName: \"kubernetes.io/projected/9b894f31-fadd-4034-a93a-d7767eb59691-kube-api-access-blkbk\") pod \"frr-k8s-webhook-server-6998585d5-5sn6p\" (UID: \"9b894f31-fadd-4034-a93a-d7767eb59691\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.443995 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " 
pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444036 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-metrics-certs\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444066 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-sockets\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444107 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8wqs\" (UniqueName: \"kubernetes.io/projected/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-kube-api-access-c8wqs\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444148 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-conf\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: E1126 13:39:37.444161 4695 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444181 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-metrics-certs\") 
pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444207 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-cert\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: E1126 13:39:37.444240 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics-certs podName:c37afac1-e7c9-40b4-b458-6c9f84dffdf9 nodeName:}" failed. No retries permitted until 2025-11-26 13:39:37.944215759 +0000 UTC m=+961.580041051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics-certs") pod "frr-k8s-tdrxb" (UID: "c37afac1-e7c9-40b4-b458-6c9f84dffdf9") : secret "frr-k8s-certs-secret" not found Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444267 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444300 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-startup\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444332 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n6whd\" (UniqueName: \"kubernetes.io/projected/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-kube-api-access-n6whd\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444562 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-reloader\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.444997 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-sockets\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.445122 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-conf\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.445206 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.445855 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-frr-startup\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc 
kubenswrapper[4695]: I1126 13:39:37.456499 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9b894f31-fadd-4034-a93a-d7767eb59691-cert\") pod \"frr-k8s-webhook-server-6998585d5-5sn6p\" (UID: \"9b894f31-fadd-4034-a93a-d7767eb59691\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.477404 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blkbk\" (UniqueName: \"kubernetes.io/projected/9b894f31-fadd-4034-a93a-d7767eb59691-kube-api-access-blkbk\") pod \"frr-k8s-webhook-server-6998585d5-5sn6p\" (UID: \"9b894f31-fadd-4034-a93a-d7767eb59691\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.487034 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6whd\" (UniqueName: \"kubernetes.io/projected/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-kube-api-access-n6whd\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.514590 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-7h5k9"] Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.545633 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8wqs\" (UniqueName: \"kubernetes.io/projected/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-kube-api-access-c8wqs\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.545720 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-metrics-certs\") pod \"speaker-4b6f5\" (UID: 
\"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.545746 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-cert\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.545778 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gdb5\" (UniqueName: \"kubernetes.io/projected/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-kube-api-access-7gdb5\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.545797 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-metallb-excludel2\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.545848 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.545874 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-metrics-certs\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc 
kubenswrapper[4695]: E1126 13:39:37.546040 4695 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Nov 26 13:39:37 crc kubenswrapper[4695]: E1126 13:39:37.546098 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-metrics-certs podName:8c2eab4a-4615-4dce-a0bb-e3316d4e2be9 nodeName:}" failed. No retries permitted until 2025-11-26 13:39:38.046081144 +0000 UTC m=+961.681906226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-metrics-certs") pod "controller-6c7b4b5f48-7h5k9" (UID: "8c2eab4a-4615-4dce-a0bb-e3316d4e2be9") : secret "controller-certs-secret" not found Nov 26 13:39:37 crc kubenswrapper[4695]: E1126 13:39:37.552586 4695 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 26 13:39:37 crc kubenswrapper[4695]: E1126 13:39:37.552709 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist podName:bc3d7aa1-897c-44e4-a493-4a80ef1142fe nodeName:}" failed. No retries permitted until 2025-11-26 13:39:38.052679234 +0000 UTC m=+961.688504516 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist") pod "speaker-4b6f5" (UID: "bc3d7aa1-897c-44e4-a493-4a80ef1142fe") : secret "metallb-memberlist" not found Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.553180 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-metallb-excludel2\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.556757 4695 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.560976 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-metrics-certs\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.575301 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-cert\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.578442 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gdb5\" (UniqueName: \"kubernetes.io/projected/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-kube-api-access-7gdb5\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.579520 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-c8wqs\" (UniqueName: \"kubernetes.io/projected/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-kube-api-access-c8wqs\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.630429 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.893684 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p"] Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.955710 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics-certs\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:37 crc kubenswrapper[4695]: I1126 13:39:37.964093 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c37afac1-e7c9-40b4-b458-6c9f84dffdf9-metrics-certs\") pod \"frr-k8s-tdrxb\" (UID: \"c37afac1-e7c9-40b4-b458-6c9f84dffdf9\") " pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:38 crc kubenswrapper[4695]: I1126 13:39:38.057767 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:38 crc kubenswrapper[4695]: I1126 13:39:38.057859 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-metrics-certs\") pod 
\"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:38 crc kubenswrapper[4695]: E1126 13:39:38.058851 4695 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Nov 26 13:39:38 crc kubenswrapper[4695]: E1126 13:39:38.058974 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist podName:bc3d7aa1-897c-44e4-a493-4a80ef1142fe nodeName:}" failed. No retries permitted until 2025-11-26 13:39:39.058937348 +0000 UTC m=+962.694762440 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist") pod "speaker-4b6f5" (UID: "bc3d7aa1-897c-44e4-a493-4a80ef1142fe") : secret "metallb-memberlist" not found Nov 26 13:39:38 crc kubenswrapper[4695]: I1126 13:39:38.062153 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8c2eab4a-4615-4dce-a0bb-e3316d4e2be9-metrics-certs\") pod \"controller-6c7b4b5f48-7h5k9\" (UID: \"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9\") " pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:38 crc kubenswrapper[4695]: I1126 13:39:38.216561 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:38 crc kubenswrapper[4695]: I1126 13:39:38.335720 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" event={"ID":"9b894f31-fadd-4034-a93a-d7767eb59691","Type":"ContainerStarted","Data":"6227e6d97a444e4121939fff08daec015af08ee75c3d1760bda14f0c4e4b1570"} Nov 26 13:39:38 crc kubenswrapper[4695]: I1126 13:39:38.357255 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:38 crc kubenswrapper[4695]: I1126 13:39:38.596057 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-7h5k9"] Nov 26 13:39:38 crc kubenswrapper[4695]: W1126 13:39:38.616578 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2eab4a_4615_4dce_a0bb_e3316d4e2be9.slice/crio-a67ae7aa6a043a277c6d633cec3c1bf2f5bf7c5efac6cfecd49a1ca4d7c5df40 WatchSource:0}: Error finding container a67ae7aa6a043a277c6d633cec3c1bf2f5bf7c5efac6cfecd49a1ca4d7c5df40: Status 404 returned error can't find the container with id a67ae7aa6a043a277c6d633cec3c1bf2f5bf7c5efac6cfecd49a1ca4d7c5df40 Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.077558 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.089903 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/bc3d7aa1-897c-44e4-a493-4a80ef1142fe-memberlist\") pod \"speaker-4b6f5\" (UID: \"bc3d7aa1-897c-44e4-a493-4a80ef1142fe\") " pod="metallb-system/speaker-4b6f5" Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.235397 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4b6f5" Nov 26 13:39:39 crc kubenswrapper[4695]: W1126 13:39:39.263662 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc3d7aa1_897c_44e4_a493_4a80ef1142fe.slice/crio-819059ad710a219b7669238689881c8e2113edfd672b9dd6f79bfdba72cbe677 WatchSource:0}: Error finding container 819059ad710a219b7669238689881c8e2113edfd672b9dd6f79bfdba72cbe677: Status 404 returned error can't find the container with id 819059ad710a219b7669238689881c8e2113edfd672b9dd6f79bfdba72cbe677 Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.347857 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4b6f5" event={"ID":"bc3d7aa1-897c-44e4-a493-4a80ef1142fe","Type":"ContainerStarted","Data":"819059ad710a219b7669238689881c8e2113edfd672b9dd6f79bfdba72cbe677"} Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.351613 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-7h5k9" event={"ID":"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9","Type":"ContainerStarted","Data":"eb71b0444e93bd01849af52baebedfce41745d114c05bc4350b1257fd6b37784"} Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.351676 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-7h5k9" event={"ID":"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9","Type":"ContainerStarted","Data":"6b2262f286de96868acf1d24d0cf2a206327bcda9e9763de65d086bd1254b735"} Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.351693 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-7h5k9" event={"ID":"8c2eab4a-4615-4dce-a0bb-e3316d4e2be9","Type":"ContainerStarted","Data":"a67ae7aa6a043a277c6d633cec3c1bf2f5bf7c5efac6cfecd49a1ca4d7c5df40"} Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.351757 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.353425 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerStarted","Data":"beba23deff21236ac8871b5fd5e677f9741987a798cc9ff436336b745c98d03b"} Nov 26 13:39:39 crc kubenswrapper[4695]: I1126 13:39:39.386257 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-7h5k9" podStartSLOduration=2.386227515 podStartE2EDuration="2.386227515s" podCreationTimestamp="2025-11-26 13:39:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:39.377301822 +0000 UTC m=+963.013126924" watchObservedRunningTime="2025-11-26 13:39:39.386227515 +0000 UTC m=+963.022052627" Nov 26 13:39:40 crc kubenswrapper[4695]: I1126 13:39:40.370466 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4b6f5" event={"ID":"bc3d7aa1-897c-44e4-a493-4a80ef1142fe","Type":"ContainerStarted","Data":"dbeecbacbf930a77e1a9ec2100bd3dfc725ad701eebcdf403a2e3be9c073e31f"} Nov 26 13:39:40 crc kubenswrapper[4695]: I1126 13:39:40.370982 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4b6f5" event={"ID":"bc3d7aa1-897c-44e4-a493-4a80ef1142fe","Type":"ContainerStarted","Data":"940697c8b35d5814960d743ecd68ba65c2e88cd7222c290a4b4b66fef3a5caa7"} Nov 26 13:39:40 crc kubenswrapper[4695]: I1126 13:39:40.371008 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4b6f5" Nov 26 13:39:40 crc kubenswrapper[4695]: I1126 13:39:40.398661 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4b6f5" podStartSLOduration=3.398633899 podStartE2EDuration="3.398633899s" podCreationTimestamp="2025-11-26 13:39:37 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:39:40.398442804 +0000 UTC m=+964.034267886" watchObservedRunningTime="2025-11-26 13:39:40.398633899 +0000 UTC m=+964.034458981" Nov 26 13:39:46 crc kubenswrapper[4695]: I1126 13:39:46.438015 4695 generic.go:334] "Generic (PLEG): container finished" podID="c37afac1-e7c9-40b4-b458-6c9f84dffdf9" containerID="db0709ea1b13b9b10e9a1208cf13b902184cc6467f827c41b114a77a26700ca2" exitCode=0 Nov 26 13:39:46 crc kubenswrapper[4695]: I1126 13:39:46.438072 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerDied","Data":"db0709ea1b13b9b10e9a1208cf13b902184cc6467f827c41b114a77a26700ca2"} Nov 26 13:39:46 crc kubenswrapper[4695]: I1126 13:39:46.441645 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" event={"ID":"9b894f31-fadd-4034-a93a-d7767eb59691","Type":"ContainerStarted","Data":"32e60ed6f4fc2b65e0cae62a2c787359babd7e0566b968cf290e0943dc7c1d2a"} Nov 26 13:39:46 crc kubenswrapper[4695]: I1126 13:39:46.441785 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:46 crc kubenswrapper[4695]: I1126 13:39:46.505154 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" podStartSLOduration=1.860207715 podStartE2EDuration="9.505124161s" podCreationTimestamp="2025-11-26 13:39:37 +0000 UTC" firstStartedPulling="2025-11-26 13:39:37.905975143 +0000 UTC m=+961.541800225" lastFinishedPulling="2025-11-26 13:39:45.550891579 +0000 UTC m=+969.186716671" observedRunningTime="2025-11-26 13:39:46.496687384 +0000 UTC m=+970.132512486" watchObservedRunningTime="2025-11-26 13:39:46.505124161 +0000 UTC m=+970.140949243" Nov 26 13:39:47 
crc kubenswrapper[4695]: I1126 13:39:47.455887 4695 generic.go:334] "Generic (PLEG): container finished" podID="c37afac1-e7c9-40b4-b458-6c9f84dffdf9" containerID="bc9af13f076bccc679819d226b56ad29696be1a38c27bdc66fdd10d8118b58cf" exitCode=0 Nov 26 13:39:47 crc kubenswrapper[4695]: I1126 13:39:47.456021 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerDied","Data":"bc9af13f076bccc679819d226b56ad29696be1a38c27bdc66fdd10d8118b58cf"} Nov 26 13:39:48 crc kubenswrapper[4695]: I1126 13:39:48.362383 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-7h5k9" Nov 26 13:39:48 crc kubenswrapper[4695]: I1126 13:39:48.465264 4695 generic.go:334] "Generic (PLEG): container finished" podID="c37afac1-e7c9-40b4-b458-6c9f84dffdf9" containerID="40f8e18e4e538fe3ff88c8309e795bb513ccc73cda4d4ead8418716d43acf880" exitCode=0 Nov 26 13:39:48 crc kubenswrapper[4695]: I1126 13:39:48.465332 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerDied","Data":"40f8e18e4e538fe3ff88c8309e795bb513ccc73cda4d4ead8418716d43acf880"} Nov 26 13:39:49 crc kubenswrapper[4695]: I1126 13:39:49.241766 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4b6f5" Nov 26 13:39:49 crc kubenswrapper[4695]: I1126 13:39:49.488900 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerStarted","Data":"423da647ba1d6730ac745253dc2ccc16b08e247b5cd6330b45c50305376dd3bb"} Nov 26 13:39:49 crc kubenswrapper[4695]: I1126 13:39:49.489341 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" 
event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerStarted","Data":"50c4af6825f32be1d7145356c82b4fa9c4913d46dad1a54a962ca2539ab47878"} Nov 26 13:39:49 crc kubenswrapper[4695]: I1126 13:39:49.489374 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerStarted","Data":"b02ca4fd2abad533db2eecd9046d2ebb98c80225170446894c48b638c2e130f6"} Nov 26 13:39:49 crc kubenswrapper[4695]: I1126 13:39:49.489385 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerStarted","Data":"1e70b83038e65d633d727fa95656bde2d16b1ed5d107581fe25dbdf5c9f8ffdd"} Nov 26 13:39:50 crc kubenswrapper[4695]: I1126 13:39:50.500394 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerStarted","Data":"0a014549d03c8b2a5b2e860a265631b2d662a2e6eb5eabdc21dee69a89488b9f"} Nov 26 13:39:50 crc kubenswrapper[4695]: I1126 13:39:50.500465 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tdrxb" event={"ID":"c37afac1-e7c9-40b4-b458-6c9f84dffdf9","Type":"ContainerStarted","Data":"605075ee088a8f12a0de5b07894ed5a679be0615f9e3ddb77d07bd405dcbc278"} Nov 26 13:39:50 crc kubenswrapper[4695]: I1126 13:39:50.500830 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:50 crc kubenswrapper[4695]: I1126 13:39:50.528052 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tdrxb" podStartSLOduration=6.325983343 podStartE2EDuration="13.528019872s" podCreationTimestamp="2025-11-26 13:39:37 +0000 UTC" firstStartedPulling="2025-11-26 13:39:38.376430823 +0000 UTC m=+962.012255905" lastFinishedPulling="2025-11-26 13:39:45.578467352 +0000 UTC m=+969.214292434" 
observedRunningTime="2025-11-26 13:39:50.523173689 +0000 UTC m=+974.158998771" watchObservedRunningTime="2025-11-26 13:39:50.528019872 +0000 UTC m=+974.163844954" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.226534 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-9c4f8"] Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.227584 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9c4f8" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.231182 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rv9lv" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.231389 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.233986 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.251431 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9c4f8"] Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.397875 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b77m\" (UniqueName: \"kubernetes.io/projected/1ca46187-fdcf-4aae-a183-2a52f7e5d99e-kube-api-access-8b77m\") pod \"openstack-operator-index-9c4f8\" (UID: \"1ca46187-fdcf-4aae-a183-2a52f7e5d99e\") " pod="openstack-operators/openstack-operator-index-9c4f8" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.500056 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b77m\" (UniqueName: \"kubernetes.io/projected/1ca46187-fdcf-4aae-a183-2a52f7e5d99e-kube-api-access-8b77m\") pod 
\"openstack-operator-index-9c4f8\" (UID: \"1ca46187-fdcf-4aae-a183-2a52f7e5d99e\") " pod="openstack-operators/openstack-operator-index-9c4f8" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.527218 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b77m\" (UniqueName: \"kubernetes.io/projected/1ca46187-fdcf-4aae-a183-2a52f7e5d99e-kube-api-access-8b77m\") pod \"openstack-operator-index-9c4f8\" (UID: \"1ca46187-fdcf-4aae-a183-2a52f7e5d99e\") " pod="openstack-operators/openstack-operator-index-9c4f8" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.545427 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9c4f8" Nov 26 13:39:52 crc kubenswrapper[4695]: I1126 13:39:52.920469 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-9c4f8"] Nov 26 13:39:53 crc kubenswrapper[4695]: I1126 13:39:53.217580 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:53 crc kubenswrapper[4695]: I1126 13:39:53.268921 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:53 crc kubenswrapper[4695]: I1126 13:39:53.522617 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9c4f8" event={"ID":"1ca46187-fdcf-4aae-a183-2a52f7e5d99e","Type":"ContainerStarted","Data":"ebe7b76da298911ae0df1ba9196e299529b6de814be9e7b7912c6e7e7264cf3d"} Nov 26 13:39:55 crc kubenswrapper[4695]: I1126 13:39:55.386488 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9c4f8"] Nov 26 13:39:55 crc kubenswrapper[4695]: I1126 13:39:55.994102 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-r56l2"] Nov 26 13:39:55 crc kubenswrapper[4695]: I1126 
13:39:55.995297 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.004604 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r56l2"] Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.163214 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlzw6\" (UniqueName: \"kubernetes.io/projected/281f751a-55f6-4753-8014-8e52bd983a45-kube-api-access-zlzw6\") pod \"openstack-operator-index-r56l2\" (UID: \"281f751a-55f6-4753-8014-8e52bd983a45\") " pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.265554 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlzw6\" (UniqueName: \"kubernetes.io/projected/281f751a-55f6-4753-8014-8e52bd983a45-kube-api-access-zlzw6\") pod \"openstack-operator-index-r56l2\" (UID: \"281f751a-55f6-4753-8014-8e52bd983a45\") " pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.290865 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlzw6\" (UniqueName: \"kubernetes.io/projected/281f751a-55f6-4753-8014-8e52bd983a45-kube-api-access-zlzw6\") pod \"openstack-operator-index-r56l2\" (UID: \"281f751a-55f6-4753-8014-8e52bd983a45\") " pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.318526 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.586705 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9c4f8" event={"ID":"1ca46187-fdcf-4aae-a183-2a52f7e5d99e","Type":"ContainerStarted","Data":"792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a"} Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.587084 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-9c4f8" podUID="1ca46187-fdcf-4aae-a183-2a52f7e5d99e" containerName="registry-server" containerID="cri-o://792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a" gracePeriod=2 Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.627553 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-9c4f8" podStartSLOduration=1.993220942 podStartE2EDuration="4.627465142s" podCreationTimestamp="2025-11-26 13:39:52 +0000 UTC" firstStartedPulling="2025-11-26 13:39:52.928474339 +0000 UTC m=+976.564299451" lastFinishedPulling="2025-11-26 13:39:55.562718569 +0000 UTC m=+979.198543651" observedRunningTime="2025-11-26 13:39:56.617288519 +0000 UTC m=+980.253113601" watchObservedRunningTime="2025-11-26 13:39:56.627465142 +0000 UTC m=+980.263290214" Nov 26 13:39:56 crc kubenswrapper[4695]: I1126 13:39:56.875520 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-r56l2"] Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.118370 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-9c4f8" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.282175 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b77m\" (UniqueName: \"kubernetes.io/projected/1ca46187-fdcf-4aae-a183-2a52f7e5d99e-kube-api-access-8b77m\") pod \"1ca46187-fdcf-4aae-a183-2a52f7e5d99e\" (UID: \"1ca46187-fdcf-4aae-a183-2a52f7e5d99e\") " Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.289012 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ca46187-fdcf-4aae-a183-2a52f7e5d99e-kube-api-access-8b77m" (OuterVolumeSpecName: "kube-api-access-8b77m") pod "1ca46187-fdcf-4aae-a183-2a52f7e5d99e" (UID: "1ca46187-fdcf-4aae-a183-2a52f7e5d99e"). InnerVolumeSpecName "kube-api-access-8b77m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.383605 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b77m\" (UniqueName: \"kubernetes.io/projected/1ca46187-fdcf-4aae-a183-2a52f7e5d99e-kube-api-access-8b77m\") on node \"crc\" DevicePath \"\"" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.595858 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r56l2" event={"ID":"281f751a-55f6-4753-8014-8e52bd983a45","Type":"ContainerStarted","Data":"d133a429c1d1f752fe4c7ea8ef36d14305e190c0a104b5f352dd7cda51dfe89c"} Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.598010 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-r56l2" event={"ID":"281f751a-55f6-4753-8014-8e52bd983a45","Type":"ContainerStarted","Data":"57f035d152c90c2535dd825827c3c5e5dcc70904793367cdc78bae482e4c9cb7"} Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.598249 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-operator-index-9c4f8" event={"ID":"1ca46187-fdcf-4aae-a183-2a52f7e5d99e","Type":"ContainerDied","Data":"792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a"} Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.598543 4695 scope.go:117] "RemoveContainer" containerID="792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.597262 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-9c4f8" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.597203 4695 generic.go:334] "Generic (PLEG): container finished" podID="1ca46187-fdcf-4aae-a183-2a52f7e5d99e" containerID="792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a" exitCode=0 Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.598935 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-9c4f8" event={"ID":"1ca46187-fdcf-4aae-a183-2a52f7e5d99e","Type":"ContainerDied","Data":"ebe7b76da298911ae0df1ba9196e299529b6de814be9e7b7912c6e7e7264cf3d"} Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.619701 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-r56l2" podStartSLOduration=2.558912511 podStartE2EDuration="2.619677656s" podCreationTimestamp="2025-11-26 13:39:55 +0000 UTC" firstStartedPulling="2025-11-26 13:39:56.899530688 +0000 UTC m=+980.535355780" lastFinishedPulling="2025-11-26 13:39:56.960295843 +0000 UTC m=+980.596120925" observedRunningTime="2025-11-26 13:39:57.617740015 +0000 UTC m=+981.253565097" watchObservedRunningTime="2025-11-26 13:39:57.619677656 +0000 UTC m=+981.255502738" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.619943 4695 scope.go:117] "RemoveContainer" containerID="792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a" Nov 26 13:39:57 crc 
kubenswrapper[4695]: E1126 13:39:57.622130 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a\": container with ID starting with 792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a not found: ID does not exist" containerID="792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.622193 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a"} err="failed to get container status \"792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a\": rpc error: code = NotFound desc = could not find container \"792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a\": container with ID starting with 792644227d828c7ddba5c21265a49605aec7267ae143bfc17041f4c1372b597a not found: ID does not exist" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.648531 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-5sn6p" Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.654123 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-9c4f8"] Nov 26 13:39:57 crc kubenswrapper[4695]: I1126 13:39:57.661299 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-9c4f8"] Nov 26 13:39:58 crc kubenswrapper[4695]: I1126 13:39:58.221358 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tdrxb" Nov 26 13:39:59 crc kubenswrapper[4695]: I1126 13:39:59.172153 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ca46187-fdcf-4aae-a183-2a52f7e5d99e" 
path="/var/lib/kubelet/pods/1ca46187-fdcf-4aae-a183-2a52f7e5d99e/volumes" Nov 26 13:40:06 crc kubenswrapper[4695]: I1126 13:40:06.318910 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:40:06 crc kubenswrapper[4695]: I1126 13:40:06.319877 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:40:06 crc kubenswrapper[4695]: I1126 13:40:06.353722 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:40:06 crc kubenswrapper[4695]: I1126 13:40:06.724845 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-r56l2" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.435843 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74"] Nov 26 13:40:08 crc kubenswrapper[4695]: E1126 13:40:08.436490 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ca46187-fdcf-4aae-a183-2a52f7e5d99e" containerName="registry-server" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.436502 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ca46187-fdcf-4aae-a183-2a52f7e5d99e" containerName="registry-server" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.436622 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ca46187-fdcf-4aae-a183-2a52f7e5d99e" containerName="registry-server" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.437525 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.446708 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2psv6" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.502038 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74"] Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.582961 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-bundle\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.583038 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gqkw\" (UniqueName: \"kubernetes.io/projected/74319b7a-3ea1-4750-8c18-4e4578472276-kube-api-access-6gqkw\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.583065 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-util\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 
13:40:08.684870 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-bundle\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.684945 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gqkw\" (UniqueName: \"kubernetes.io/projected/74319b7a-3ea1-4750-8c18-4e4578472276-kube-api-access-6gqkw\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.685012 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-util\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.685984 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-util\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.686496 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-bundle\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.716758 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gqkw\" (UniqueName: \"kubernetes.io/projected/74319b7a-3ea1-4750-8c18-4e4578472276-kube-api-access-6gqkw\") pod \"067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:08 crc kubenswrapper[4695]: I1126 13:40:08.758505 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:09 crc kubenswrapper[4695]: I1126 13:40:09.186616 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74"] Nov 26 13:40:09 crc kubenswrapper[4695]: W1126 13:40:09.195029 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74319b7a_3ea1_4750_8c18_4e4578472276.slice/crio-fe162ead3123d517563774098ac77fe3ee5b6d1bc47062201f37fb0296bb9a93 WatchSource:0}: Error finding container fe162ead3123d517563774098ac77fe3ee5b6d1bc47062201f37fb0296bb9a93: Status 404 returned error can't find the container with id fe162ead3123d517563774098ac77fe3ee5b6d1bc47062201f37fb0296bb9a93 Nov 26 13:40:09 crc kubenswrapper[4695]: I1126 13:40:09.867240 4695 generic.go:334] "Generic (PLEG): container finished" podID="74319b7a-3ea1-4750-8c18-4e4578472276" containerID="b1204cbfcd4d6fafe0754f78d865adfaf965beaa2dcb2f50b4587fbb5fa63479" exitCode=0 Nov 26 
13:40:09 crc kubenswrapper[4695]: I1126 13:40:09.867391 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" event={"ID":"74319b7a-3ea1-4750-8c18-4e4578472276","Type":"ContainerDied","Data":"b1204cbfcd4d6fafe0754f78d865adfaf965beaa2dcb2f50b4587fbb5fa63479"} Nov 26 13:40:09 crc kubenswrapper[4695]: I1126 13:40:09.868403 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" event={"ID":"74319b7a-3ea1-4750-8c18-4e4578472276","Type":"ContainerStarted","Data":"fe162ead3123d517563774098ac77fe3ee5b6d1bc47062201f37fb0296bb9a93"} Nov 26 13:40:10 crc kubenswrapper[4695]: I1126 13:40:10.879978 4695 generic.go:334] "Generic (PLEG): container finished" podID="74319b7a-3ea1-4750-8c18-4e4578472276" containerID="090294070f5298f64691d5d450afc42822626b4cf6082eade994c01cc00116fe" exitCode=0 Nov 26 13:40:10 crc kubenswrapper[4695]: I1126 13:40:10.880127 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" event={"ID":"74319b7a-3ea1-4750-8c18-4e4578472276","Type":"ContainerDied","Data":"090294070f5298f64691d5d450afc42822626b4cf6082eade994c01cc00116fe"} Nov 26 13:40:11 crc kubenswrapper[4695]: I1126 13:40:11.890729 4695 generic.go:334] "Generic (PLEG): container finished" podID="74319b7a-3ea1-4750-8c18-4e4578472276" containerID="0c0b20b17bae7b8d3056ead801bcb8ddac6322d89dcebd84e724ab978772eeed" exitCode=0 Nov 26 13:40:11 crc kubenswrapper[4695]: I1126 13:40:11.891014 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" event={"ID":"74319b7a-3ea1-4750-8c18-4e4578472276","Type":"ContainerDied","Data":"0c0b20b17bae7b8d3056ead801bcb8ddac6322d89dcebd84e724ab978772eeed"} Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.200849 
4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.360542 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-bundle\") pod \"74319b7a-3ea1-4750-8c18-4e4578472276\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.360722 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-util\") pod \"74319b7a-3ea1-4750-8c18-4e4578472276\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.360772 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gqkw\" (UniqueName: \"kubernetes.io/projected/74319b7a-3ea1-4750-8c18-4e4578472276-kube-api-access-6gqkw\") pod \"74319b7a-3ea1-4750-8c18-4e4578472276\" (UID: \"74319b7a-3ea1-4750-8c18-4e4578472276\") " Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.362568 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-bundle" (OuterVolumeSpecName: "bundle") pod "74319b7a-3ea1-4750-8c18-4e4578472276" (UID: "74319b7a-3ea1-4750-8c18-4e4578472276"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.371791 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74319b7a-3ea1-4750-8c18-4e4578472276-kube-api-access-6gqkw" (OuterVolumeSpecName: "kube-api-access-6gqkw") pod "74319b7a-3ea1-4750-8c18-4e4578472276" (UID: "74319b7a-3ea1-4750-8c18-4e4578472276"). 
InnerVolumeSpecName "kube-api-access-6gqkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.374457 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-util" (OuterVolumeSpecName: "util") pod "74319b7a-3ea1-4750-8c18-4e4578472276" (UID: "74319b7a-3ea1-4750-8c18-4e4578472276"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.462792 4695 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.462839 4695 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/74319b7a-3ea1-4750-8c18-4e4578472276-util\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.462858 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gqkw\" (UniqueName: \"kubernetes.io/projected/74319b7a-3ea1-4750-8c18-4e4578472276-kube-api-access-6gqkw\") on node \"crc\" DevicePath \"\"" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.910624 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" event={"ID":"74319b7a-3ea1-4750-8c18-4e4578472276","Type":"ContainerDied","Data":"fe162ead3123d517563774098ac77fe3ee5b6d1bc47062201f37fb0296bb9a93"} Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.910699 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe162ead3123d517563774098ac77fe3ee5b6d1bc47062201f37fb0296bb9a93" Nov 26 13:40:13 crc kubenswrapper[4695]: I1126 13:40:13.910767 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.487394 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj"] Nov 26 13:40:20 crc kubenswrapper[4695]: E1126 13:40:20.488312 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74319b7a-3ea1-4750-8c18-4e4578472276" containerName="pull" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.488329 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="74319b7a-3ea1-4750-8c18-4e4578472276" containerName="pull" Nov 26 13:40:20 crc kubenswrapper[4695]: E1126 13:40:20.488339 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74319b7a-3ea1-4750-8c18-4e4578472276" containerName="extract" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.488363 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="74319b7a-3ea1-4750-8c18-4e4578472276" containerName="extract" Nov 26 13:40:20 crc kubenswrapper[4695]: E1126 13:40:20.488383 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74319b7a-3ea1-4750-8c18-4e4578472276" containerName="util" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.488394 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="74319b7a-3ea1-4750-8c18-4e4578472276" containerName="util" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.488528 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="74319b7a-3ea1-4750-8c18-4e4578472276" containerName="extract" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.489041 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.493744 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-vmqcs" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.533942 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj"] Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.678210 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tld7t\" (UniqueName: \"kubernetes.io/projected/301c123c-e342-4fda-b713-03954d29dd4a-kube-api-access-tld7t\") pod \"openstack-operator-controller-operator-78fd744894-tc5nj\" (UID: \"301c123c-e342-4fda-b713-03954d29dd4a\") " pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.779383 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tld7t\" (UniqueName: \"kubernetes.io/projected/301c123c-e342-4fda-b713-03954d29dd4a-kube-api-access-tld7t\") pod \"openstack-operator-controller-operator-78fd744894-tc5nj\" (UID: \"301c123c-e342-4fda-b713-03954d29dd4a\") " pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" Nov 26 13:40:20 crc kubenswrapper[4695]: I1126 13:40:20.812216 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tld7t\" (UniqueName: \"kubernetes.io/projected/301c123c-e342-4fda-b713-03954d29dd4a-kube-api-access-tld7t\") pod \"openstack-operator-controller-operator-78fd744894-tc5nj\" (UID: \"301c123c-e342-4fda-b713-03954d29dd4a\") " pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" Nov 26 13:40:21 crc kubenswrapper[4695]: I1126 13:40:21.111486 4695 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" Nov 26 13:40:21 crc kubenswrapper[4695]: I1126 13:40:21.401760 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj"] Nov 26 13:40:21 crc kubenswrapper[4695]: I1126 13:40:21.983333 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" event={"ID":"301c123c-e342-4fda-b713-03954d29dd4a","Type":"ContainerStarted","Data":"22c99da2e79fa02318ee433897204de367ac90af56f90b39cd183d7bf08129de"} Nov 26 13:40:26 crc kubenswrapper[4695]: I1126 13:40:26.013139 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" event={"ID":"301c123c-e342-4fda-b713-03954d29dd4a","Type":"ContainerStarted","Data":"2513970f585c09c2e7820d1e91eefbd841d6bf941ceea07aa97fe38cf725aff2"} Nov 26 13:40:26 crc kubenswrapper[4695]: I1126 13:40:26.013735 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" Nov 26 13:40:26 crc kubenswrapper[4695]: I1126 13:40:26.044834 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" podStartSLOduration=2.087374138 podStartE2EDuration="6.044807906s" podCreationTimestamp="2025-11-26 13:40:20 +0000 UTC" firstStartedPulling="2025-11-26 13:40:21.413483514 +0000 UTC m=+1005.049308596" lastFinishedPulling="2025-11-26 13:40:25.370917282 +0000 UTC m=+1009.006742364" observedRunningTime="2025-11-26 13:40:26.042596976 +0000 UTC m=+1009.678422078" watchObservedRunningTime="2025-11-26 13:40:26.044807906 +0000 UTC m=+1009.680633008" Nov 26 13:40:31 crc kubenswrapper[4695]: I1126 13:40:31.116979 
4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-78fd744894-tc5nj" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.003895 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.005826 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.009047 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-hzttw" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.014070 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.015666 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.018994 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-vbqpx" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.026074 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.035584 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.041177 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-twcbz"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.042486 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.044646 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-pt9mk" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.055068 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-twcbz"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.078548 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c5wd\" (UniqueName: \"kubernetes.io/projected/dbf58d06-6729-4a3e-8682-641649f1ecd2-kube-api-access-9c5wd\") pod \"designate-operator-controller-manager-955677c94-twcbz\" (UID: \"dbf58d06-6729-4a3e-8682-641649f1ecd2\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.078641 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9njzg\" (UniqueName: \"kubernetes.io/projected/868435aa-9f77-46df-af13-ae24b16dee14-kube-api-access-9njzg\") pod \"cinder-operator-controller-manager-6b7f75547b-qbchs\" (UID: \"868435aa-9f77-46df-af13-ae24b16dee14\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.078667 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltgjd\" (UniqueName: \"kubernetes.io/projected/a487eafc-c65d-4ce9-b801-e489882a4dfa-kube-api-access-ltgjd\") pod \"barbican-operator-controller-manager-7b64f4fb85-xtp7h\" (UID: \"a487eafc-c65d-4ce9-b801-e489882a4dfa\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" Nov 26 13:40:50 crc kubenswrapper[4695]: 
I1126 13:40:50.082007 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.083310 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.095565 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-8dg6t" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.106367 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.107890 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.110392 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-xt4wj" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.126446 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.136521 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.137895 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.143186 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-4xjc4" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.143400 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.173489 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.180206 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.180292 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c5wd\" (UniqueName: \"kubernetes.io/projected/dbf58d06-6729-4a3e-8682-641649f1ecd2-kube-api-access-9c5wd\") pod \"designate-operator-controller-manager-955677c94-twcbz\" (UID: \"dbf58d06-6729-4a3e-8682-641649f1ecd2\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.180323 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km7kw\" (UniqueName: \"kubernetes.io/projected/c66a73b5-1103-497f-87a6-70d964111fc9-kube-api-access-km7kw\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.180391 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4zl5\" (UniqueName: \"kubernetes.io/projected/490f30b9-4b79-4a35-a77d-44c8a90b5dcf-kube-api-access-d4zl5\") pod \"glance-operator-controller-manager-589cbd6b5b-6t784\" (UID: \"490f30b9-4b79-4a35-a77d-44c8a90b5dcf\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.180424 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k98fp\" (UniqueName: \"kubernetes.io/projected/f76ff94a-b97d-4bbc-bc03-3b8df6d35095-kube-api-access-k98fp\") pod \"heat-operator-controller-manager-5b77f656f-s22qh\" (UID: \"f76ff94a-b97d-4bbc-bc03-3b8df6d35095\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.180463 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9njzg\" (UniqueName: \"kubernetes.io/projected/868435aa-9f77-46df-af13-ae24b16dee14-kube-api-access-9njzg\") pod \"cinder-operator-controller-manager-6b7f75547b-qbchs\" (UID: \"868435aa-9f77-46df-af13-ae24b16dee14\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.180489 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltgjd\" (UniqueName: \"kubernetes.io/projected/a487eafc-c65d-4ce9-b801-e489882a4dfa-kube-api-access-ltgjd\") pod \"barbican-operator-controller-manager-7b64f4fb85-xtp7h\" (UID: \"a487eafc-c65d-4ce9-b801-e489882a4dfa\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 
13:40:50.191409 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.192752 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.222262 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-8hx4s" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.230435 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.266580 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltgjd\" (UniqueName: \"kubernetes.io/projected/a487eafc-c65d-4ce9-b801-e489882a4dfa-kube-api-access-ltgjd\") pod \"barbican-operator-controller-manager-7b64f4fb85-xtp7h\" (UID: \"a487eafc-c65d-4ce9-b801-e489882a4dfa\") " pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.268223 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9njzg\" (UniqueName: \"kubernetes.io/projected/868435aa-9f77-46df-af13-ae24b16dee14-kube-api-access-9njzg\") pod \"cinder-operator-controller-manager-6b7f75547b-qbchs\" (UID: \"868435aa-9f77-46df-af13-ae24b16dee14\") " pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.287163 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c5wd\" (UniqueName: \"kubernetes.io/projected/dbf58d06-6729-4a3e-8682-641649f1ecd2-kube-api-access-9c5wd\") pod \"designate-operator-controller-manager-955677c94-twcbz\" 
(UID: \"dbf58d06-6729-4a3e-8682-641649f1ecd2\") " pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.315543 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km7kw\" (UniqueName: \"kubernetes.io/projected/c66a73b5-1103-497f-87a6-70d964111fc9-kube-api-access-km7kw\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.316310 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4zl5\" (UniqueName: \"kubernetes.io/projected/490f30b9-4b79-4a35-a77d-44c8a90b5dcf-kube-api-access-d4zl5\") pod \"glance-operator-controller-manager-589cbd6b5b-6t784\" (UID: \"490f30b9-4b79-4a35-a77d-44c8a90b5dcf\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.316365 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k98fp\" (UniqueName: \"kubernetes.io/projected/f76ff94a-b97d-4bbc-bc03-3b8df6d35095-kube-api-access-k98fp\") pod \"heat-operator-controller-manager-5b77f656f-s22qh\" (UID: \"f76ff94a-b97d-4bbc-bc03-3b8df6d35095\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.317672 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:50 crc kubenswrapper[4695]: E1126 13:40:50.317813 4695 
secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 26 13:40:50 crc kubenswrapper[4695]: E1126 13:40:50.317861 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert podName:c66a73b5-1103-497f-87a6-70d964111fc9 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:50.817843457 +0000 UTC m=+1034.453668539 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert") pod "infra-operator-controller-manager-57548d458d-v2jnr" (UID: "c66a73b5-1103-497f-87a6-70d964111fc9") : secret "infra-operator-webhook-server-cert" not found Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.329000 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.337897 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.352436 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.353977 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.358295 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.359653 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9r7b4" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.366143 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km7kw\" (UniqueName: \"kubernetes.io/projected/c66a73b5-1103-497f-87a6-70d964111fc9-kube-api-access-km7kw\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.370791 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k98fp\" (UniqueName: \"kubernetes.io/projected/f76ff94a-b97d-4bbc-bc03-3b8df6d35095-kube-api-access-k98fp\") pod \"heat-operator-controller-manager-5b77f656f-s22qh\" (UID: \"f76ff94a-b97d-4bbc-bc03-3b8df6d35095\") " pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.381724 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.391292 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.392882 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.393879 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4zl5\" (UniqueName: \"kubernetes.io/projected/490f30b9-4b79-4a35-a77d-44c8a90b5dcf-kube-api-access-d4zl5\") pod \"glance-operator-controller-manager-589cbd6b5b-6t784\" (UID: \"490f30b9-4b79-4a35-a77d-44c8a90b5dcf\") " pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.409963 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-nqhkh" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.414028 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.418874 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.420252 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfrfj\" (UniqueName: \"kubernetes.io/projected/d71ecb02-382d-4fde-b349-343c97f769fd-kube-api-access-bfrfj\") pod \"horizon-operator-controller-manager-5d494799bf-mljjd\" (UID: \"d71ecb02-382d-4fde-b349-343c97f769fd\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.428675 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.461149 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.482570 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.484132 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.497149 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sxtwv" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.524656 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nslv2\" (UniqueName: \"kubernetes.io/projected/d84948ad-d0e9-4d86-97a7-1a0d9e13d858-kube-api-access-nslv2\") pod \"ironic-operator-controller-manager-67cb4dc6d4-vk52w\" (UID: \"d84948ad-d0e9-4d86-97a7-1a0d9e13d858\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.524725 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547zq\" (UniqueName: \"kubernetes.io/projected/fcca3fad-5da8-4242-894e-9dd5917f3828-kube-api-access-547zq\") pod \"manila-operator-controller-manager-5d499bf58b-n8tkz\" (UID: \"fcca3fad-5da8-4242-894e-9dd5917f3828\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.524822 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-bfrfj\" (UniqueName: \"kubernetes.io/projected/d71ecb02-382d-4fde-b349-343c97f769fd-kube-api-access-bfrfj\") pod \"horizon-operator-controller-manager-5d494799bf-mljjd\" (UID: \"d71ecb02-382d-4fde-b349-343c97f769fd\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.524900 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcgq4\" (UniqueName: \"kubernetes.io/projected/8d102669-be66-4bc6-8328-3e7d8a66f4c1-kube-api-access-qcgq4\") pod \"keystone-operator-controller-manager-7b4567c7cf-ndnf7\" (UID: \"8d102669-be66-4bc6-8328-3e7d8a66f4c1\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.525880 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.544181 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.546322 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.556130 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nmc7p" Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.570965 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"] Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.576950 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.584503 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4dmq7"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.635789 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.638305 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcgq4\" (UniqueName: \"kubernetes.io/projected/8d102669-be66-4bc6-8328-3e7d8a66f4c1-kube-api-access-qcgq4\") pod \"keystone-operator-controller-manager-7b4567c7cf-ndnf7\" (UID: \"8d102669-be66-4bc6-8328-3e7d8a66f4c1\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.638478 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nslv2\" (UniqueName: \"kubernetes.io/projected/d84948ad-d0e9-4d86-97a7-1a0d9e13d858-kube-api-access-nslv2\") pod \"ironic-operator-controller-manager-67cb4dc6d4-vk52w\" (UID: \"d84948ad-d0e9-4d86-97a7-1a0d9e13d858\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.638572 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-547zq\" (UniqueName: \"kubernetes.io/projected/fcca3fad-5da8-4242-894e-9dd5917f3828-kube-api-access-547zq\") pod \"manila-operator-controller-manager-5d499bf58b-n8tkz\" (UID: \"fcca3fad-5da8-4242-894e-9dd5917f3828\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.654276 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfrfj\" (UniqueName: \"kubernetes.io/projected/d71ecb02-382d-4fde-b349-343c97f769fd-kube-api-access-bfrfj\") pod \"horizon-operator-controller-manager-5d494799bf-mljjd\" (UID: \"d71ecb02-382d-4fde-b349-343c97f769fd\") " pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.669009 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcgq4\" (UniqueName: \"kubernetes.io/projected/8d102669-be66-4bc6-8328-3e7d8a66f4c1-kube-api-access-qcgq4\") pod \"keystone-operator-controller-manager-7b4567c7cf-ndnf7\" (UID: \"8d102669-be66-4bc6-8328-3e7d8a66f4c1\") " pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.670708 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.671901 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.673169 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.677947 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-nzcwb"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.684453 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.693277 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nslv2\" (UniqueName: \"kubernetes.io/projected/d84948ad-d0e9-4d86-97a7-1a0d9e13d858-kube-api-access-nslv2\") pod \"ironic-operator-controller-manager-67cb4dc6d4-vk52w\" (UID: \"d84948ad-d0e9-4d86-97a7-1a0d9e13d858\") " pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.695195 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.696591 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.700501 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-cxzz7"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.709681 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.719456 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-547zq\" (UniqueName: \"kubernetes.io/projected/fcca3fad-5da8-4242-894e-9dd5917f3828-kube-api-access-547zq\") pod \"manila-operator-controller-manager-5d499bf58b-n8tkz\" (UID: \"fcca3fad-5da8-4242-894e-9dd5917f3828\") " pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.730633 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.732101 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.734079 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tlz55"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.734918 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.743770 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flj4x\" (UniqueName: \"kubernetes.io/projected/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-kube-api-access-flj4x\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.743851 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xqvj\" (UniqueName: \"kubernetes.io/projected/51be52bb-362c-4b52-9962-a5e6b3e9dddb-kube-api-access-2xqvj\") pod \"octavia-operator-controller-manager-64cdc6ff96-s9mbz\" (UID: \"51be52bb-362c-4b52-9962-a5e6b3e9dddb\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.743915 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpjp7\" (UniqueName: \"kubernetes.io/projected/7e515c4b-ebc1-42fc-a3b9-406552e7f797-kube-api-access-rpjp7\") pod \"neutron-operator-controller-manager-6fdcddb789-rbpf5\" (UID: \"7e515c4b-ebc1-42fc-a3b9-406552e7f797\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.743963 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbxg8\" (UniqueName: \"kubernetes.io/projected/5c7bfa9c-0c31-4ece-915a-c4e4d37fadad-kube-api-access-fbxg8\") pod \"nova-operator-controller-manager-79556f57fc-5qjxh\" (UID: \"5c7bfa9c-0c31-4ece-915a-c4e4d37fadad\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.744040 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.744080 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86bn\" (UniqueName: \"kubernetes.io/projected/e1127f2e-e8b5-4002-9f8b-7f3a286640ba-kube-api-access-z86bn\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-28gfw\" (UID: \"e1127f2e-e8b5-4002-9f8b-7f3a286640ba\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.763336 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.785150 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.786627 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.791079 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-fckfg"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.791398 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.798125 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.810400 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.811747 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.814676 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-jv4jk"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.839410 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.843559 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.845599 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.846725 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xj95\" (UniqueName: \"kubernetes.io/projected/16e51188-65ad-4a0d-a571-5f02e38d68b6-kube-api-access-2xj95\") pod \"ovn-operator-controller-manager-56897c768d-chvvt\" (UID: \"16e51188-65ad-4a0d-a571-5f02e38d68b6\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.846802 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z86bn\" (UniqueName: \"kubernetes.io/projected/e1127f2e-e8b5-4002-9f8b-7f3a286640ba-kube-api-access-z86bn\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-28gfw\" (UID: \"e1127f2e-e8b5-4002-9f8b-7f3a286640ba\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.846861 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xktk8\" (UniqueName: \"kubernetes.io/projected/ba47d9b1-160d-40da-a691-db4b4e2557d5-kube-api-access-xktk8\") pod \"placement-operator-controller-manager-57988cc5b5-j27xp\" (UID: \"ba47d9b1-160d-40da-a691-db4b4e2557d5\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.846912 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flj4x\" (UniqueName: \"kubernetes.io/projected/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-kube-api-access-flj4x\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.846953 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xqvj\" (UniqueName: \"kubernetes.io/projected/51be52bb-362c-4b52-9962-a5e6b3e9dddb-kube-api-access-2xqvj\") pod \"octavia-operator-controller-manager-64cdc6ff96-s9mbz\" (UID: \"51be52bb-362c-4b52-9962-a5e6b3e9dddb\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.847529 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpjp7\" (UniqueName: \"kubernetes.io/projected/7e515c4b-ebc1-42fc-a3b9-406552e7f797-kube-api-access-rpjp7\") pod \"neutron-operator-controller-manager-6fdcddb789-rbpf5\" (UID: \"7e515c4b-ebc1-42fc-a3b9-406552e7f797\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.847639 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbxg8\" (UniqueName: \"kubernetes.io/projected/5c7bfa9c-0c31-4ece-915a-c4e4d37fadad-kube-api-access-fbxg8\") pod \"nova-operator-controller-manager-79556f57fc-5qjxh\" (UID: \"5c7bfa9c-0c31-4ece-915a-c4e4d37fadad\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.866391 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-g7nr8"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.872638 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.879728 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.879911 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.882706 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg"]
Nov 26 13:40:50 crc kubenswrapper[4695]: E1126 13:40:50.885683 4695 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Nov 26 13:40:50 crc kubenswrapper[4695]: E1126 13:40:50.885876 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert podName:c66a73b5-1103-497f-87a6-70d964111fc9 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:51.885850027 +0000 UTC m=+1035.521675109 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert") pod "infra-operator-controller-manager-57548d458d-v2jnr" (UID: "c66a73b5-1103-497f-87a6-70d964111fc9") : secret "infra-operator-webhook-server-cert" not found
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.894229 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xqvj\" (UniqueName: \"kubernetes.io/projected/51be52bb-362c-4b52-9962-a5e6b3e9dddb-kube-api-access-2xqvj\") pod \"octavia-operator-controller-manager-64cdc6ff96-s9mbz\" (UID: \"51be52bb-362c-4b52-9962-a5e6b3e9dddb\") " pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.896106 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flj4x\" (UniqueName: \"kubernetes.io/projected/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-kube-api-access-flj4x\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.907614 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbxg8\" (UniqueName: \"kubernetes.io/projected/5c7bfa9c-0c31-4ece-915a-c4e4d37fadad-kube-api-access-fbxg8\") pod \"nova-operator-controller-manager-79556f57fc-5qjxh\" (UID: \"5c7bfa9c-0c31-4ece-915a-c4e4d37fadad\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.908909 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpjp7\" (UniqueName: \"kubernetes.io/projected/7e515c4b-ebc1-42fc-a3b9-406552e7f797-kube-api-access-rpjp7\") pod \"neutron-operator-controller-manager-6fdcddb789-rbpf5\" (UID: \"7e515c4b-ebc1-42fc-a3b9-406552e7f797\") " pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.912931 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z86bn\" (UniqueName: \"kubernetes.io/projected/e1127f2e-e8b5-4002-9f8b-7f3a286640ba-kube-api-access-z86bn\") pod \"mariadb-operator-controller-manager-66f4dd4bc7-28gfw\" (UID: \"e1127f2e-e8b5-4002-9f8b-7f3a286640ba\") " pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw"
Nov 26 13:40:50 crc kubenswrapper[4695]: E1126 13:40:50.921606 4695 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 26 13:40:50 crc kubenswrapper[4695]: E1126 13:40:50.921893 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert podName:071300a7-9f99-4e3f-8fd7-ceabb7ba738d nodeName:}" failed. No retries permitted until 2025-11-26 13:40:51.421736793 +0000 UTC m=+1035.057561875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" (UID: "071300a7-9f99-4e3f-8fd7-ceabb7ba738d") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.938187 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.940002 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.940115 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.948184 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-x9g55"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.967072 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.975144 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.981893 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.987448 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584"]
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.989116 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584"
Nov 26 13:40:50 crc kubenswrapper[4695]: I1126 13:40:50.993114 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-4gxlr"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.008421 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.024394 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xktk8\" (UniqueName: \"kubernetes.io/projected/ba47d9b1-160d-40da-a691-db4b4e2557d5-kube-api-access-xktk8\") pod \"placement-operator-controller-manager-57988cc5b5-j27xp\" (UID: \"ba47d9b1-160d-40da-a691-db4b4e2557d5\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.024579 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gckm9\" (UniqueName: \"kubernetes.io/projected/b044c065-4ba3-4390-88c9-340e2fc1ba2f-kube-api-access-gckm9\") pod \"swift-operator-controller-manager-d77b94747-t7sqd\" (UID: \"b044c065-4ba3-4390-88c9-340e2fc1ba2f\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.024821 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszb7\" (UniqueName: \"kubernetes.io/projected/92f90071-d080-4579-9a87-aef8e8b760d3-kube-api-access-hszb7\") pod \"telemetry-operator-controller-manager-76cc84c6bb-vdrzg\" (UID: \"92f90071-d080-4579-9a87-aef8e8b760d3\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.024888 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xj95\" (UniqueName: \"kubernetes.io/projected/16e51188-65ad-4a0d-a571-5f02e38d68b6-kube-api-access-2xj95\") pod \"ovn-operator-controller-manager-56897c768d-chvvt\" (UID: \"16e51188-65ad-4a0d-a571-5f02e38d68b6\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.042951 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.051684 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.056631 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xj95\" (UniqueName: \"kubernetes.io/projected/16e51188-65ad-4a0d-a571-5f02e38d68b6-kube-api-access-2xj95\") pod \"ovn-operator-controller-manager-56897c768d-chvvt\" (UID: \"16e51188-65ad-4a0d-a571-5f02e38d68b6\") " pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.057534 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sgrcg"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.068315 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.068729 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.085402 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xktk8\" (UniqueName: \"kubernetes.io/projected/ba47d9b1-160d-40da-a691-db4b4e2557d5-kube-api-access-xktk8\") pod \"placement-operator-controller-manager-57988cc5b5-j27xp\" (UID: \"ba47d9b1-160d-40da-a691-db4b4e2557d5\") " pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.101508 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.112734 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.116111 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.116760 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-l57rt"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.116977 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.126307 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gckm9\" (UniqueName: \"kubernetes.io/projected/b044c065-4ba3-4390-88c9-340e2fc1ba2f-kube-api-access-gckm9\") pod \"swift-operator-controller-manager-d77b94747-t7sqd\" (UID: \"b044c065-4ba3-4390-88c9-340e2fc1ba2f\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.131408 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htbmp\" (UniqueName: \"kubernetes.io/projected/9716d6f2-3c85-4d5d-a261-966d0e6d6dfc-kube-api-access-htbmp\") pod \"watcher-operator-controller-manager-656dcb59d4-jsmdt\" (UID: \"9716d6f2-3c85-4d5d-a261-966d0e6d6dfc\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.133280 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4v5\" (UniqueName: \"kubernetes.io/projected/b87cee59-5442-46a0-b5d2-8467196ceedf-kube-api-access-ll4v5\") pod \"test-operator-controller-manager-5cd6c7f4c8-z6584\" (UID: \"b87cee59-5442-46a0-b5d2-8467196ceedf\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.165549 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hszb7\" (UniqueName: \"kubernetes.io/projected/92f90071-d080-4579-9a87-aef8e8b760d3-kube-api-access-hszb7\") pod \"telemetry-operator-controller-manager-76cc84c6bb-vdrzg\" (UID: \"92f90071-d080-4579-9a87-aef8e8b760d3\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.165420 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gckm9\" (UniqueName: \"kubernetes.io/projected/b044c065-4ba3-4390-88c9-340e2fc1ba2f-kube-api-access-gckm9\") pod \"swift-operator-controller-manager-d77b94747-t7sqd\" (UID: \"b044c065-4ba3-4390-88c9-340e2fc1ba2f\") " pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.126930 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.177257 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.190736 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.205896 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hszb7\" (UniqueName: \"kubernetes.io/projected/92f90071-d080-4579-9a87-aef8e8b760d3-kube-api-access-hszb7\") pod \"telemetry-operator-controller-manager-76cc84c6bb-vdrzg\" (UID: \"92f90071-d080-4579-9a87-aef8e8b760d3\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.229489 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.230717 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.230842 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.234901 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-zsw86"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.263289 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.267280 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlvmn\" (UniqueName: \"kubernetes.io/projected/375f27a9-421e-422e-baee-6d5ac575788a-kube-api-access-zlvmn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z2pb5\" (UID: \"375f27a9-421e-422e-baee-6d5ac575788a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.267409 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htbmp\" (UniqueName: \"kubernetes.io/projected/9716d6f2-3c85-4d5d-a261-966d0e6d6dfc-kube-api-access-htbmp\") pod \"watcher-operator-controller-manager-656dcb59d4-jsmdt\" (UID: \"9716d6f2-3c85-4d5d-a261-966d0e6d6dfc\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.267450 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4v5\" (UniqueName: \"kubernetes.io/projected/b87cee59-5442-46a0-b5d2-8467196ceedf-kube-api-access-ll4v5\") pod \"test-operator-controller-manager-5cd6c7f4c8-z6584\" (UID: \"b87cee59-5442-46a0-b5d2-8467196ceedf\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.267472 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4mdh\" (UniqueName: \"kubernetes.io/projected/4c653020-2777-48e3-b06f-b33a61aabc36-kube-api-access-q4mdh\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.267513 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.267556 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.292056 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4v5\" (UniqueName: \"kubernetes.io/projected/b87cee59-5442-46a0-b5d2-8467196ceedf-kube-api-access-ll4v5\") pod \"test-operator-controller-manager-5cd6c7f4c8-z6584\" (UID: \"b87cee59-5442-46a0-b5d2-8467196ceedf\") " pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.292427 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htbmp\" (UniqueName: \"kubernetes.io/projected/9716d6f2-3c85-4d5d-a261-966d0e6d6dfc-kube-api-access-htbmp\") pod \"watcher-operator-controller-manager-656dcb59d4-jsmdt\" (UID: \"9716d6f2-3c85-4d5d-a261-966d0e6d6dfc\") " pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.312864 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"
Nov 26 13:40:51 crc kubenswrapper[4695]: W1126 13:40:51.320842 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf58d06_6729_4a3e_8682_641649f1ecd2.slice/crio-33ffa37019abe9914473270e4ea8ee8fec0f878059609bc79eb7a66bb07b70e2 WatchSource:0}: Error finding container 33ffa37019abe9914473270e4ea8ee8fec0f878059609bc79eb7a66bb07b70e2: Status 404 returned error can't find the container with id 33ffa37019abe9914473270e4ea8ee8fec0f878059609bc79eb7a66bb07b70e2
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.328127 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-955677c94-twcbz"]
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.335772 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.366829 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" event={"ID":"868435aa-9f77-46df-af13-ae24b16dee14","Type":"ContainerStarted","Data":"763655145fe49098dd735a99273c0ba95d73567bb3424049fadfbd172bd596d7"}
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.369068 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.369148 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.369258 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlvmn\" (UniqueName: \"kubernetes.io/projected/375f27a9-421e-422e-baee-6d5ac575788a-kube-api-access-zlvmn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z2pb5\" (UID: \"375f27a9-421e-422e-baee-6d5ac575788a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5"
Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.369330 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4mdh\" (UniqueName:
\"kubernetes.io/projected/4c653020-2777-48e3-b06f-b33a61aabc36-kube-api-access-q4mdh\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.369624 4695 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.369726 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:51.869699331 +0000 UTC m=+1035.505524603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "metrics-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.369802 4695 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.369834 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:51.869824775 +0000 UTC m=+1035.505650077 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "webhook-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.371273 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.392846 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlvmn\" (UniqueName: \"kubernetes.io/projected/375f27a9-421e-422e-baee-6d5ac575788a-kube-api-access-zlvmn\") pod \"rabbitmq-cluster-operator-manager-668c99d594-z2pb5\" (UID: \"375f27a9-421e-422e-baee-6d5ac575788a\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.394940 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4mdh\" (UniqueName: \"kubernetes.io/projected/4c653020-2777-48e3-b06f-b33a61aabc36-kube-api-access-q4mdh\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.400377 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.420294 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.421782 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.422200 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h"] Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.468849 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh"] Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.470600 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.471284 4695 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.471341 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert podName:071300a7-9f99-4e3f-8fd7-ceabb7ba738d nodeName:}" failed. No retries permitted until 2025-11-26 13:40:52.471322629 +0000 UTC m=+1036.107147711 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" (UID: "071300a7-9f99-4e3f-8fd7-ceabb7ba738d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.481503 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784"] Nov 26 13:40:51 crc kubenswrapper[4695]: W1126 13:40:51.521991 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490f30b9_4b79_4a35_a77d_44c8a90b5dcf.slice/crio-577136af9184d02d2063e9e58c2d49cca565dd4a263fc817242ecf3b39293017 WatchSource:0}: Error finding container 577136af9184d02d2063e9e58c2d49cca565dd4a263fc817242ecf3b39293017: Status 404 returned error can't find the container with id 577136af9184d02d2063e9e58c2d49cca565dd4a263fc817242ecf3b39293017 Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.691836 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.746242 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7"] Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.795432 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w"] Nov 26 13:40:51 crc kubenswrapper[4695]: W1126 13:40:51.820395 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd84948ad_d0e9_4d86_97a7_1a0d9e13d858.slice/crio-b63fc1ebdd8782fd8453d1bad2b7ac048c01a774290350df1b3803adcc478d42 WatchSource:0}: Error finding container b63fc1ebdd8782fd8453d1bad2b7ac048c01a774290350df1b3803adcc478d42: Status 404 returned error can't find the container with id b63fc1ebdd8782fd8453d1bad2b7ac048c01a774290350df1b3803adcc478d42 Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.881920 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.882006 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.882235 4695 
secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.882317 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:52.882292586 +0000 UTC m=+1036.518117668 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "metrics-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.882836 4695 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.882934 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:52.882920705 +0000 UTC m=+1036.518745787 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "webhook-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.914407 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh"] Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.963317 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5"] Nov 26 13:40:51 crc kubenswrapper[4695]: I1126 13:40:51.983371 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.983684 4695 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 26 13:40:51 crc kubenswrapper[4695]: E1126 13:40:51.983747 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert podName:c66a73b5-1103-497f-87a6-70d964111fc9 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:53.983728678 +0000 UTC m=+1037.619553760 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert") pod "infra-operator-controller-manager-57548d458d-v2jnr" (UID: "c66a73b5-1103-497f-87a6-70d964111fc9") : secret "infra-operator-webhook-server-cert" not found Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.005832 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz"] Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.011848 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd"] Nov 26 13:40:52 crc kubenswrapper[4695]: W1126 13:40:52.014429 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e515c4b_ebc1_42fc_a3b9_406552e7f797.slice/crio-3e5e2f5743edb1d696f59b13f03b550a780f2c3416bab7f58cee71c71e1b0289 WatchSource:0}: Error finding container 3e5e2f5743edb1d696f59b13f03b550a780f2c3416bab7f58cee71c71e1b0289: Status 404 returned error can't find the container with id 3e5e2f5743edb1d696f59b13f03b550a780f2c3416bab7f58cee71c71e1b0289 Nov 26 13:40:52 crc kubenswrapper[4695]: W1126 13:40:52.067318 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71ecb02_382d_4fde_b349_343c97f769fd.slice/crio-1d3de58a01a15d16194fbe7d86dde43a8947970af1264223386677e7e670ce2e WatchSource:0}: Error finding container 1d3de58a01a15d16194fbe7d86dde43a8947970af1264223386677e7e670ce2e: Status 404 returned error can't find the container with id 1d3de58a01a15d16194fbe7d86dde43a8947970af1264223386677e7e670ce2e Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.140190 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz"] Nov 26 13:40:52 crc kubenswrapper[4695]: 
I1126 13:40:52.236847 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp"] Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.249121 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw"] Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.262608 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt"] Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.268085 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.279852 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xj95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-chvvt_openstack-operators(16e51188-65ad-4a0d-a571-5f02e38d68b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.291174 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2xj95,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-56897c768d-chvvt_openstack-operators(16e51188-65ad-4a0d-a571-5f02e38d68b6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.292807 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" podUID="16e51188-65ad-4a0d-a571-5f02e38d68b6" Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.380972 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5" event={"ID":"7e515c4b-ebc1-42fc-a3b9-406552e7f797","Type":"ContainerStarted","Data":"3e5e2f5743edb1d696f59b13f03b550a780f2c3416bab7f58cee71c71e1b0289"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.382911 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw" event={"ID":"e1127f2e-e8b5-4002-9f8b-7f3a286640ba","Type":"ContainerStarted","Data":"f07a2ff1dd9217d4f4a24662ac3a1e822277e0fb01a1d76d79159e6e50c8de84"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.384869 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz" event={"ID":"51be52bb-362c-4b52-9962-a5e6b3e9dddb","Type":"ContainerStarted","Data":"d0c954f17979d927570963d65dc7c52591b721bb2ae10e03e2a2ed35737d03d5"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.387569 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" event={"ID":"dbf58d06-6729-4a3e-8682-641649f1ecd2","Type":"ContainerStarted","Data":"33ffa37019abe9914473270e4ea8ee8fec0f878059609bc79eb7a66bb07b70e2"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.389627 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh" event={"ID":"5c7bfa9c-0c31-4ece-915a-c4e4d37fadad","Type":"ContainerStarted","Data":"4c84bc722096c07e0af4438247d594be4220a8e9cc3ac26b23326d119d073c5b"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.393574 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" event={"ID":"fcca3fad-5da8-4242-894e-9dd5917f3828","Type":"ContainerStarted","Data":"dccf65e525ef7db90be0d9c892ff748d0e853ca09ee73164688392ee47ccc6a7"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.398024 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" event={"ID":"16e51188-65ad-4a0d-a571-5f02e38d68b6","Type":"ContainerStarted","Data":"dd2d0ecf132fba53e4370d305480aea5313a9276fa77bb7123b976f95fe162e9"} Nov 26 13:40:52 crc 
kubenswrapper[4695]: E1126 13:40:52.402181 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" podUID="16e51188-65ad-4a0d-a571-5f02e38d68b6" Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.403053 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp" event={"ID":"ba47d9b1-160d-40da-a691-db4b4e2557d5","Type":"ContainerStarted","Data":"7108a77542b7d014b46bf80631944888f117b9f8d681ea7db535bc6fb19f9734"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.403914 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" event={"ID":"d71ecb02-382d-4fde-b349-343c97f769fd","Type":"ContainerStarted","Data":"1d3de58a01a15d16194fbe7d86dde43a8947970af1264223386677e7e670ce2e"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.408365 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" event={"ID":"490f30b9-4b79-4a35-a77d-44c8a90b5dcf","Type":"ContainerStarted","Data":"577136af9184d02d2063e9e58c2d49cca565dd4a263fc817242ecf3b39293017"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.410471 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" 
event={"ID":"8d102669-be66-4bc6-8328-3e7d8a66f4c1","Type":"ContainerStarted","Data":"e51311a54fd0b3ffd6aa82ccf01d80d98c4817c59a0bf15d5094f12641d08258"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.425676 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" event={"ID":"f76ff94a-b97d-4bbc-bc03-3b8df6d35095","Type":"ContainerStarted","Data":"851bfce02c1726edf9979e05061ad0173d3ab533afc46fef69df76e0def7e6c9"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.432366 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" event={"ID":"a487eafc-c65d-4ce9-b801-e489882a4dfa","Type":"ContainerStarted","Data":"0b4e8be4f395b87f51a36dacaab7d60ee74422467405f967c59d7a4d3fc08fbf"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.435361 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" event={"ID":"d84948ad-d0e9-4d86-97a7-1a0d9e13d858","Type":"ContainerStarted","Data":"b63fc1ebdd8782fd8453d1bad2b7ac048c01a774290350df1b3803adcc478d42"} Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.459442 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5"] Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.487476 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584"] Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.497147 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.497930 4695 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.497980 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert podName:071300a7-9f99-4e3f-8fd7-ceabb7ba738d nodeName:}" failed. No retries permitted until 2025-11-26 13:40:54.497964695 +0000 UTC m=+1038.133789767 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" (UID: "071300a7-9f99-4e3f-8fd7-ceabb7ba738d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.513504 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt"] Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.525680 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd"] Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.529151 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ll4v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-z6584_openstack-operators(b87cee59-5442-46a0-b5d2-8467196ceedf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.531270 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ll4v5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5cd6c7f4c8-z6584_openstack-operators(b87cee59-5442-46a0-b5d2-8467196ceedf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.532432 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" podUID="b87cee59-5442-46a0-b5d2-8467196ceedf" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.532730 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-htbmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-jsmdt_openstack-operators(9716d6f2-3c85-4d5d-a261-966d0e6d6dfc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.536034 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-htbmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-656dcb59d4-jsmdt_openstack-operators(9716d6f2-3c85-4d5d-a261-966d0e6d6dfc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.538532 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" podUID="9716d6f2-3c85-4d5d-a261-966d0e6d6dfc" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.542829 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:72236301580ff9080f7e311b832d7ba66666a9afeda51f969745229624ff26e4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gckm9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-d77b94747-t7sqd_openstack-operators(b044c065-4ba3-4390-88c9-340e2fc1ba2f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.601990 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg"] Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.606733 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hszb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-vdrzg_openstack-operators(92f90071-d080-4579-9a87-aef8e8b760d3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.609497 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hszb7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-vdrzg_openstack-operators(92f90071-d080-4579-9a87-aef8e8b760d3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.610649 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" 
podUID="92f90071-d080-4579-9a87-aef8e8b760d3" Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.905202 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:52 crc kubenswrapper[4695]: I1126 13:40:52.905426 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.905637 4695 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.905744 4695 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.905807 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:54.905784531 +0000 UTC m=+1038.541609613 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "metrics-server-cert" not found Nov 26 13:40:52 crc kubenswrapper[4695]: E1126 13:40:52.905973 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:54.905826732 +0000 UTC m=+1038.541651964 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "webhook-server-cert" not found Nov 26 13:40:53 crc kubenswrapper[4695]: I1126 13:40:53.452270 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" event={"ID":"b044c065-4ba3-4390-88c9-340e2fc1ba2f","Type":"ContainerStarted","Data":"9e2978180b373eb83e02170e54e1635f2e7dc92b584a94ec861b0e2be554b4b3"} Nov 26 13:40:53 crc kubenswrapper[4695]: I1126 13:40:53.457137 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5" event={"ID":"375f27a9-421e-422e-baee-6d5ac575788a","Type":"ContainerStarted","Data":"102a53e03c0359c35df920cd76eb93436ee906372f88fd8e1b7c1dca156ac233"} Nov 26 13:40:53 crc kubenswrapper[4695]: I1126 13:40:53.463776 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" 
event={"ID":"b87cee59-5442-46a0-b5d2-8467196ceedf","Type":"ContainerStarted","Data":"a1ba26da4a44db92a6f1383e665fe90e08b4910a5941ce8b85307c825d7045bf"} Nov 26 13:40:53 crc kubenswrapper[4695]: I1126 13:40:53.466706 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" event={"ID":"9716d6f2-3c85-4d5d-a261-966d0e6d6dfc","Type":"ContainerStarted","Data":"b230606c6bee2c3dd398832c8c0d5ddf14502f3d3f69b203d684872e081beded"} Nov 26 13:40:53 crc kubenswrapper[4695]: E1126 13:40:53.469711 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" podUID="b87cee59-5442-46a0-b5d2-8467196ceedf" Nov 26 13:40:53 crc kubenswrapper[4695]: E1126 13:40:53.470316 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" podUID="9716d6f2-3c85-4d5d-a261-966d0e6d6dfc" Nov 26 13:40:53 crc kubenswrapper[4695]: I1126 13:40:53.472268 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" 
event={"ID":"92f90071-d080-4579-9a87-aef8e8b760d3","Type":"ContainerStarted","Data":"180349c3cd086be8698a8adbffb9124c65a36e21a73d0b5568846bbae9065d3e"} Nov 26 13:40:53 crc kubenswrapper[4695]: E1126 13:40:53.475834 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bbb543d2d67c73e5df5d6357c3251363eb34a99575c5bf10416edd45dbdae2f6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" podUID="16e51188-65ad-4a0d-a571-5f02e38d68b6" Nov 26 13:40:53 crc kubenswrapper[4695]: E1126 13:40:53.507045 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" podUID="92f90071-d080-4579-9a87-aef8e8b760d3" Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.026452 4695 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.026572 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert podName:c66a73b5-1103-497f-87a6-70d964111fc9 nodeName:}" failed. 
No retries permitted until 2025-11-26 13:40:58.026546567 +0000 UTC m=+1041.662371649 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert") pod "infra-operator-controller-manager-57548d458d-v2jnr" (UID: "c66a73b5-1103-497f-87a6-70d964111fc9") : secret "infra-operator-webhook-server-cert" not found Nov 26 13:40:54 crc kubenswrapper[4695]: I1126 13:40:54.026232 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.490848 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:210517b918e30df1c95fc7d961c8e57e9a9d1cc2b9fe7eb4dad2034dd53a90aa\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" podUID="b87cee59-5442-46a0-b5d2-8467196ceedf" Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.491415 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" podUID="92f90071-d080-4579-9a87-aef8e8b760d3" Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.494092 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6bed55b172b9ee8ccc3952cbfc543d8bd44e2690f6db94348a754152fd78f4cf\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" podUID="9716d6f2-3c85-4d5d-a261-966d0e6d6dfc" Nov 26 13:40:54 crc kubenswrapper[4695]: I1126 13:40:54.536166 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.536451 4695 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.536584 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert podName:071300a7-9f99-4e3f-8fd7-ceabb7ba738d nodeName:}" failed. No retries permitted until 2025-11-26 13:40:58.53655549 +0000 UTC m=+1042.172380572 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" (UID: "071300a7-9f99-4e3f-8fd7-ceabb7ba738d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:54 crc kubenswrapper[4695]: I1126 13:40:54.943656 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:54 crc kubenswrapper[4695]: I1126 13:40:54.943734 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.943950 4695 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.943946 4695 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.944035 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:58.944014985 +0000 UTC m=+1042.579840067 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "metrics-server-cert" not found Nov 26 13:40:54 crc kubenswrapper[4695]: E1126 13:40:54.944077 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:40:58.944048676 +0000 UTC m=+1042.579873918 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "webhook-server-cert" not found Nov 26 13:40:58 crc kubenswrapper[4695]: I1126 13:40:58.106185 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:58 crc kubenswrapper[4695]: I1126 13:40:58.118626 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c66a73b5-1103-497f-87a6-70d964111fc9-cert\") pod \"infra-operator-controller-manager-57548d458d-v2jnr\" (UID: \"c66a73b5-1103-497f-87a6-70d964111fc9\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:58 crc kubenswrapper[4695]: I1126 13:40:58.266421 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:40:58 crc kubenswrapper[4695]: I1126 13:40:58.616568 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:40:58 crc kubenswrapper[4695]: E1126 13:40:58.616991 4695 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:58 crc kubenswrapper[4695]: E1126 13:40:58.617073 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert podName:071300a7-9f99-4e3f-8fd7-ceabb7ba738d nodeName:}" failed. No retries permitted until 2025-11-26 13:41:06.617048185 +0000 UTC m=+1050.252873267 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" (UID: "071300a7-9f99-4e3f-8fd7-ceabb7ba738d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:40:59 crc kubenswrapper[4695]: I1126 13:40:59.024623 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:59 crc kubenswrapper[4695]: I1126 13:40:59.024699 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:40:59 crc kubenswrapper[4695]: E1126 13:40:59.024865 4695 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 26 13:40:59 crc kubenswrapper[4695]: E1126 13:40:59.024973 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:41:07.024949374 +0000 UTC m=+1050.660774456 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "webhook-server-cert" not found Nov 26 13:40:59 crc kubenswrapper[4695]: E1126 13:40:59.024887 4695 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 26 13:40:59 crc kubenswrapper[4695]: E1126 13:40:59.025050 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs podName:4c653020-2777-48e3-b06f-b33a61aabc36 nodeName:}" failed. No retries permitted until 2025-11-26 13:41:07.025031767 +0000 UTC m=+1050.660856839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs") pod "openstack-operator-controller-manager-6bcd57bc9d-w2r7r" (UID: "4c653020-2777-48e3-b06f-b33a61aabc36") : secret "metrics-server-cert" not found Nov 26 13:41:05 crc kubenswrapper[4695]: E1126 13:41:05.444606 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711" Nov 26 13:41:05 crc kubenswrapper[4695]: E1126 13:41:05.445419 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:25faa5b0e4801d4d3b01a28b877ed3188eee71f33ad66f3c2e86b7921758e711,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qcgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7b4567c7cf-ndnf7_openstack-operators(8d102669-be66-4bc6-8328-3e7d8a66f4c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:41:06 crc kubenswrapper[4695]: I1126 13:41:06.581796 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr"] Nov 26 13:41:06 crc kubenswrapper[4695]: I1126 13:41:06.677483 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:41:06 crc kubenswrapper[4695]: E1126 13:41:06.677819 4695 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:41:06 crc kubenswrapper[4695]: E1126 13:41:06.677888 4695 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert podName:071300a7-9f99-4e3f-8fd7-ceabb7ba738d nodeName:}" failed. No retries permitted until 2025-11-26 13:41:22.677869054 +0000 UTC m=+1066.313694136 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert") pod "openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" (UID: "071300a7-9f99-4e3f-8fd7-ceabb7ba738d") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.088152 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.088246 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.097858 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-metrics-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 
13:41:07.098186 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4c653020-2777-48e3-b06f-b33a61aabc36-webhook-certs\") pod \"openstack-operator-controller-manager-6bcd57bc9d-w2r7r\" (UID: \"4c653020-2777-48e3-b06f-b33a61aabc36\") " pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.338502 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.606378 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" event={"ID":"490f30b9-4b79-4a35-a77d-44c8a90b5dcf","Type":"ContainerStarted","Data":"7c56d129d48112737b2c7ab4fb9f20788188139036e6b702f36c856a7f6bff84"} Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.608110 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" event={"ID":"c66a73b5-1103-497f-87a6-70d964111fc9","Type":"ContainerStarted","Data":"c2b62f1f8157b8d364929761fe7028a6137792afef65fffc255bcabca0422580"} Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.610530 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" event={"ID":"a487eafc-c65d-4ce9-b801-e489882a4dfa","Type":"ContainerStarted","Data":"b7c407facd8a8a9a4a5db31edfbd23fdfa571ba9fc9b9e7989fa777acd700f91"} Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.613181 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" 
event={"ID":"d71ecb02-382d-4fde-b349-343c97f769fd","Type":"ContainerStarted","Data":"a5ab4a419f777db55f55ee0bfe00dc08f4951a73cbfdb976c43de31d85c0cfb9"} Nov 26 13:41:07 crc kubenswrapper[4695]: I1126 13:41:07.620642 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5" event={"ID":"375f27a9-421e-422e-baee-6d5ac575788a","Type":"ContainerStarted","Data":"383ac29f99242b24028d52bd465e97e77830adb584254f5415bab8c15912a588"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.640319 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh" event={"ID":"5c7bfa9c-0c31-4ece-915a-c4e4d37fadad","Type":"ContainerStarted","Data":"ab7867dbedb9320cede9b7fb29b501817695fb165b263162a26d73f0bbaf00a9"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.654826 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz" event={"ID":"51be52bb-362c-4b52-9962-a5e6b3e9dddb","Type":"ContainerStarted","Data":"eeb73164743a0e04d9f05f349073747d3cc4ec11471c6051b0309ac9c05f1635"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.672422 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" event={"ID":"fcca3fad-5da8-4242-894e-9dd5917f3828","Type":"ContainerStarted","Data":"da47c7237a804312f8a52b1820a8b5b8c7d64ced9649feee8a51b03f6eacdbd9"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.698377 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" event={"ID":"d84948ad-d0e9-4d86-97a7-1a0d9e13d858","Type":"ContainerStarted","Data":"b1da2c311cd6a325441632d7de206013eee352c61b1184669631c0bf117ef078"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.701732 4695 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp" event={"ID":"ba47d9b1-160d-40da-a691-db4b4e2557d5","Type":"ContainerStarted","Data":"14a7486b7e1d5fb932e94fcd8a31fad63f784c77be7f140bc7490e73f04cf6d6"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.703551 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" event={"ID":"dbf58d06-6729-4a3e-8682-641649f1ecd2","Type":"ContainerStarted","Data":"7c5a1556c5bc129f90e028a84010473f9bde3b7519e660c1555cb755d23a42ad"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.704558 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" event={"ID":"868435aa-9f77-46df-af13-ae24b16dee14","Type":"ContainerStarted","Data":"0400a24174301ca5f19951cbd2225e89cd485ed2ac3b44e84c7552e537a1a62e"} Nov 26 13:41:09 crc kubenswrapper[4695]: I1126 13:41:09.713058 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw" event={"ID":"e1127f2e-e8b5-4002-9f8b-7f3a286640ba","Type":"ContainerStarted","Data":"26b0b272d6734d1da8560f773c195de4b897b54109855a4e5ddfdb088b0e4e8a"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.443832 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-z2pb5" podStartSLOduration=7.174123996 podStartE2EDuration="21.443811177s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.495685623 +0000 UTC m=+1036.131510705" lastFinishedPulling="2025-11-26 13:41:06.765372804 +0000 UTC m=+1050.401197886" observedRunningTime="2025-11-26 13:41:07.646637836 +0000 UTC m=+1051.282462918" watchObservedRunningTime="2025-11-26 13:41:11.443811177 +0000 UTC m=+1055.079636259" Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 
13:41:11.460077 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r"] Nov 26 13:41:11 crc kubenswrapper[4695]: W1126 13:41:11.472183 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c653020_2777_48e3_b06f_b33a61aabc36.slice/crio-bc7a3722560e0927eeb47ad2c3fffe2314a099d8996972a252e2c272c5f7d422 WatchSource:0}: Error finding container bc7a3722560e0927eeb47ad2c3fffe2314a099d8996972a252e2c272c5f7d422: Status 404 returned error can't find the container with id bc7a3722560e0927eeb47ad2c3fffe2314a099d8996972a252e2c272c5f7d422 Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.761988 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5" event={"ID":"7e515c4b-ebc1-42fc-a3b9-406552e7f797","Type":"ContainerStarted","Data":"649bf8ff179745cd5a953513e0b7abcf7eda2f45dc98ad5a8efa06613c69cfea"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.768381 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" event={"ID":"f76ff94a-b97d-4bbc-bc03-3b8df6d35095","Type":"ContainerStarted","Data":"6383fce0daaef726e1d1d7b8afca195f8f4120c42e06cf7a63ddeca917a49057"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.772141 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" event={"ID":"4c653020-2777-48e3-b06f-b33a61aabc36","Type":"ContainerStarted","Data":"bc7a3722560e0927eeb47ad2c3fffe2314a099d8996972a252e2c272c5f7d422"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.784532 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" 
event={"ID":"490f30b9-4b79-4a35-a77d-44c8a90b5dcf","Type":"ContainerStarted","Data":"40a613b1226bb88aa889edfd9e470686170716b52c142f06fbadfc1d297ec8d8"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.786142 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.790605 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.791785 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz" event={"ID":"51be52bb-362c-4b52-9962-a5e6b3e9dddb","Type":"ContainerStarted","Data":"ccd28101446a13b684b3c49fa066448cdde5aa18ff8eee4e07284508fc144b14"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.792043 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz" Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.805197 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" event={"ID":"fcca3fad-5da8-4242-894e-9dd5917f3828","Type":"ContainerStarted","Data":"82b157d34f1ed14026f0e39b26052e86f04ffbee343d37706028f49981b6c039"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.806455 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.807942 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-589cbd6b5b-6t784" podStartSLOduration=2.071963893 podStartE2EDuration="21.807921649s" 
podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.550770196 +0000 UTC m=+1035.186595268" lastFinishedPulling="2025-11-26 13:41:11.286727942 +0000 UTC m=+1054.922553024" observedRunningTime="2025-11-26 13:41:11.802640192 +0000 UTC m=+1055.438465284" watchObservedRunningTime="2025-11-26 13:41:11.807921649 +0000 UTC m=+1055.443746731" Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.818690 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" event={"ID":"16e51188-65ad-4a0d-a571-5f02e38d68b6","Type":"ContainerStarted","Data":"9bef352cba64980ee92b6dedc28a1d098eee81fa80dba898ed70f49886e63cad"} Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.882877 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz" podStartSLOduration=2.915900692 podStartE2EDuration="21.882856094s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.209763437 +0000 UTC m=+1035.845588519" lastFinishedPulling="2025-11-26 13:41:11.176718839 +0000 UTC m=+1054.812543921" observedRunningTime="2025-11-26 13:41:11.856735346 +0000 UTC m=+1055.492560428" watchObservedRunningTime="2025-11-26 13:41:11.882856094 +0000 UTC m=+1055.518681176" Nov 26 13:41:11 crc kubenswrapper[4695]: I1126 13:41:11.882990 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" podStartSLOduration=2.723501947 podStartE2EDuration="21.882986407s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.098584935 +0000 UTC m=+1035.734410017" lastFinishedPulling="2025-11-26 13:41:11.258069395 +0000 UTC m=+1054.893894477" observedRunningTime="2025-11-26 13:41:11.882481002 +0000 UTC m=+1055.518306084" watchObservedRunningTime="2025-11-26 
13:41:11.882986407 +0000 UTC m=+1055.518811489" Nov 26 13:41:12 crc kubenswrapper[4695]: I1126 13:41:12.827300 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" event={"ID":"a487eafc-c65d-4ce9-b801-e489882a4dfa","Type":"ContainerStarted","Data":"2507c291c0fb8fb6799f8e7dd513fdc0566038bd8584a01fc3bc9dc0dd20d1af"} Nov 26 13:41:12 crc kubenswrapper[4695]: I1126 13:41:12.827812 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" Nov 26 13:41:12 crc kubenswrapper[4695]: I1126 13:41:12.831011 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" Nov 26 13:41:12 crc kubenswrapper[4695]: I1126 13:41:12.872694 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b64f4fb85-xtp7h" podStartSLOduration=4.095344005 podStartE2EDuration="23.872665232s" podCreationTimestamp="2025-11-26 13:40:49 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.511694188 +0000 UTC m=+1035.147519270" lastFinishedPulling="2025-11-26 13:41:11.289015415 +0000 UTC m=+1054.924840497" observedRunningTime="2025-11-26 13:41:12.868641164 +0000 UTC m=+1056.504466266" watchObservedRunningTime="2025-11-26 13:41:12.872665232 +0000 UTC m=+1056.508490314" Nov 26 13:41:15 crc kubenswrapper[4695]: E1126 13:41:15.765599 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" podUID="b044c065-4ba3-4390-88c9-340e2fc1ba2f" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.861261 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5" event={"ID":"7e515c4b-ebc1-42fc-a3b9-406552e7f797","Type":"ContainerStarted","Data":"5c9e9d01528b0e1f6a42609eb6933fdc5e1ee6dd593dcd3c2a43c5296d9fb1df"} Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.861449 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.864169 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" event={"ID":"4c653020-2777-48e3-b06f-b33a61aabc36","Type":"ContainerStarted","Data":"978514aa2166250c9aa7db22822290b6b84bed8c476a24bffa4e7abe8c64e551"} Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.864405 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.866287 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" event={"ID":"b044c065-4ba3-4390-88c9-340e2fc1ba2f","Type":"ContainerStarted","Data":"a7f8af0c24152a9db8e0fb8254e11265cb636fa7c0da778ed1736bb4935839ff"} Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.867832 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" event={"ID":"d71ecb02-382d-4fde-b349-343c97f769fd","Type":"ContainerStarted","Data":"e4d9e59981d0c9b7e6c68a70d87bdd67347bde3fdf1175073ae65ae0b4945245"} Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.868923 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.872974 4695 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.876036 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw" event={"ID":"e1127f2e-e8b5-4002-9f8b-7f3a286640ba","Type":"ContainerStarted","Data":"5d1efaed8e7e5cb02322c44e1990a43e549fc8f596d75ec91cb240e630121525"} Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.876974 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.880560 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.893168 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5" podStartSLOduration=11.361752315 podStartE2EDuration="25.893145525s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.040861578 +0000 UTC m=+1035.676686660" lastFinishedPulling="2025-11-26 13:41:06.572254788 +0000 UTC m=+1050.208079870" observedRunningTime="2025-11-26 13:41:15.891068459 +0000 UTC m=+1059.526893541" watchObservedRunningTime="2025-11-26 13:41:15.893145525 +0000 UTC m=+1059.528970607" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.942302 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-66f4dd4bc7-28gfw" podStartSLOduration=6.930207381 podStartE2EDuration="25.942275901s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.277062809 +0000 UTC m=+1035.912887891" 
lastFinishedPulling="2025-11-26 13:41:11.289131329 +0000 UTC m=+1054.924956411" observedRunningTime="2025-11-26 13:41:15.942051684 +0000 UTC m=+1059.577876776" watchObservedRunningTime="2025-11-26 13:41:15.942275901 +0000 UTC m=+1059.578100983" Nov 26 13:41:15 crc kubenswrapper[4695]: I1126 13:41:15.992977 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" podStartSLOduration=25.992943425 podStartE2EDuration="25.992943425s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:41:15.989483086 +0000 UTC m=+1059.625308178" watchObservedRunningTime="2025-11-26 13:41:15.992943425 +0000 UTC m=+1059.628768507" Nov 26 13:41:16 crc kubenswrapper[4695]: E1126 13:41:16.162045 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" podUID="8d102669-be66-4bc6-8328-3e7d8a66f4c1" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.902383 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" event={"ID":"868435aa-9f77-46df-af13-ae24b16dee14","Type":"ContainerStarted","Data":"2e1ee9f3aa680167d6c6f6892b67e410fa5a1b852615ec9c8f02d2a3847b14af"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.902884 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.908873 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" 
event={"ID":"16e51188-65ad-4a0d-a571-5f02e38d68b6","Type":"ContainerStarted","Data":"1dcbe4b59a55ba5c37c436ef8d7e0053db83569bb25d713768a6a4a7f5bbcb6b"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.909447 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.909578 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.915559 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" event={"ID":"8d102669-be66-4bc6-8328-3e7d8a66f4c1","Type":"ContainerStarted","Data":"a0e13835435830633118c922c56a494f870bc8f974a8103fc3c5fe3034486678"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.917473 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.923826 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp" event={"ID":"ba47d9b1-160d-40da-a691-db4b4e2557d5","Type":"ContainerStarted","Data":"48dd6ab816f4d3824eef797300eb8812ccda39360710da111a766bca243a94ba"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.924242 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.934761 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6b7f75547b-qbchs" podStartSLOduration=8.012013772 podStartE2EDuration="27.934738553s" 
podCreationTimestamp="2025-11-26 13:40:49 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.364009621 +0000 UTC m=+1034.999834703" lastFinishedPulling="2025-11-26 13:41:11.286734412 +0000 UTC m=+1054.922559484" observedRunningTime="2025-11-26 13:41:16.930990994 +0000 UTC m=+1060.566816086" watchObservedRunningTime="2025-11-26 13:41:16.934738553 +0000 UTC m=+1060.570563635" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.935068 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5d494799bf-mljjd" podStartSLOduration=7.751449331 podStartE2EDuration="26.935063154s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.103873384 +0000 UTC m=+1035.739698466" lastFinishedPulling="2025-11-26 13:41:11.287487207 +0000 UTC m=+1054.923312289" observedRunningTime="2025-11-26 13:41:16.035673159 +0000 UTC m=+1059.671498241" watchObservedRunningTime="2025-11-26 13:41:16.935063154 +0000 UTC m=+1060.570888236" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.936780 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" event={"ID":"9716d6f2-3c85-4d5d-a261-966d0e6d6dfc","Type":"ContainerStarted","Data":"3ba4134fea9f43be2bb4fd1143231ea17909f540ea189cef372b6addfcfdde79"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.936827 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" event={"ID":"9716d6f2-3c85-4d5d-a261-966d0e6d6dfc","Type":"ContainerStarted","Data":"7a0bfc1060c88be66445fbe2b4d41f56e52815d34ec9754f773e457c92d4e886"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.936870 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp" Nov 26 13:41:16 crc kubenswrapper[4695]: 
I1126 13:41:16.937110 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.950849 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" event={"ID":"f76ff94a-b97d-4bbc-bc03-3b8df6d35095","Type":"ContainerStarted","Data":"8a27b555c06627c9b80f6ef93819865e862e5ff85c9b1ac404114aeb241b97af"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.952240 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.965381 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.981992 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" event={"ID":"92f90071-d080-4579-9a87-aef8e8b760d3","Type":"ContainerStarted","Data":"e1d90dcefd994fd1ee888ed6a41678a0599c69586cfa8c0c3e939326ae679251"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.982044 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" event={"ID":"92f90071-d080-4579-9a87-aef8e8b760d3","Type":"ContainerStarted","Data":"440b7795529bdaf065c5cff6827dbbf2961c81c1b6aa11cfb840fa9c2cc405e2"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.982295 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.986100 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" event={"ID":"d84948ad-d0e9-4d86-97a7-1a0d9e13d858","Type":"ContainerStarted","Data":"3fb552517de4772eaba99018946a4b46e0df31c614405bf068c91552ce199b76"} Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.986922 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" Nov 26 13:41:16 crc kubenswrapper[4695]: I1126 13:41:16.996684 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.006068 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" event={"ID":"dbf58d06-6729-4a3e-8682-641649f1ecd2","Type":"ContainerStarted","Data":"865b727f39c3b12a1f5b3f8a24640488d86400ec61dbaa5d3c7fda607453f801"} Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.006252 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.012012 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-56897c768d-chvvt" podStartSLOduration=8.587540941 podStartE2EDuration="27.01198648s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.279592749 +0000 UTC m=+1035.915417831" lastFinishedPulling="2025-11-26 13:41:10.704038288 +0000 UTC m=+1054.339863370" observedRunningTime="2025-11-26 13:41:16.994877619 +0000 UTC m=+1060.630702711" watchObservedRunningTime="2025-11-26 13:41:17.01198648 +0000 UTC m=+1060.647811562" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.052694 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.058825 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh" event={"ID":"5c7bfa9c-0c31-4ece-915a-c4e4d37fadad","Type":"ContainerStarted","Data":"ecfcf9798f1cbb5fcc5c318efaa3e837eb4846439dd61a31a18fe55c738388e0"} Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.064449 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.088457 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-57988cc5b5-j27xp" podStartSLOduration=8.098208682 podStartE2EDuration="27.088425891s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.267827876 +0000 UTC m=+1035.903652958" lastFinishedPulling="2025-11-26 13:41:11.258045085 +0000 UTC m=+1054.893870167" observedRunningTime="2025-11-26 13:41:17.08271792 +0000 UTC m=+1060.718543002" watchObservedRunningTime="2025-11-26 13:41:17.088425891 +0000 UTC m=+1060.724250973" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.096687 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.098636 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" event={"ID":"b87cee59-5442-46a0-b5d2-8467196ceedf","Type":"ContainerStarted","Data":"1f31e385d282db0cc78d87b50e67577922915f558fda7e5693afd7d456234f54"} Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.119976 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" event={"ID":"c66a73b5-1103-497f-87a6-70d964111fc9","Type":"ContainerStarted","Data":"337695a16d2fd9b52a05560505fc694334b550ff09e57eef65656da648fbe418"} Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.120027 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.146463 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-6fdcddb789-rbpf5" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.250387 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5b77f656f-s22qh" podStartSLOduration=12.634506424 podStartE2EDuration="27.250339289s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.511810862 +0000 UTC m=+1035.147635944" lastFinishedPulling="2025-11-26 13:41:06.127643727 +0000 UTC m=+1049.763468809" observedRunningTime="2025-11-26 13:41:17.239974311 +0000 UTC m=+1060.875799393" watchObservedRunningTime="2025-11-26 13:41:17.250339289 +0000 UTC m=+1060.886164371" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.251430 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-5qjxh" podStartSLOduration=7.915427413 podStartE2EDuration="27.251422433s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.922804648 +0000 UTC m=+1035.558629730" lastFinishedPulling="2025-11-26 13:41:11.258799678 +0000 UTC m=+1054.894624750" observedRunningTime="2025-11-26 13:41:17.182401078 +0000 UTC m=+1060.818226180" watchObservedRunningTime="2025-11-26 13:41:17.251422433 +0000 UTC m=+1060.887247515" Nov 26 13:41:17 crc 
kubenswrapper[4695]: I1126 13:41:17.299554 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" podStartSLOduration=4.228215495 podStartE2EDuration="27.299529607s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.532646253 +0000 UTC m=+1036.168471335" lastFinishedPulling="2025-11-26 13:41:15.603960375 +0000 UTC m=+1059.239785447" observedRunningTime="2025-11-26 13:41:17.295703726 +0000 UTC m=+1060.931528828" watchObservedRunningTime="2025-11-26 13:41:17.299529607 +0000 UTC m=+1060.935354689" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.367393 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-67cb4dc6d4-vk52w" podStartSLOduration=7.7335125940000005 podStartE2EDuration="27.367303894s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.826678225 +0000 UTC m=+1035.462503307" lastFinishedPulling="2025-11-26 13:41:11.460469525 +0000 UTC m=+1055.096294607" observedRunningTime="2025-11-26 13:41:17.355223241 +0000 UTC m=+1060.991048323" watchObservedRunningTime="2025-11-26 13:41:17.367303894 +0000 UTC m=+1061.003128976" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.462231 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" podStartSLOduration=4.438978599 podStartE2EDuration="27.462206639s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.605547112 +0000 UTC m=+1036.241372194" lastFinishedPulling="2025-11-26 13:41:15.628775152 +0000 UTC m=+1059.264600234" observedRunningTime="2025-11-26 13:41:17.447883646 +0000 UTC m=+1061.083708738" watchObservedRunningTime="2025-11-26 13:41:17.462206639 +0000 UTC m=+1061.098031721" Nov 26 13:41:17 crc 
kubenswrapper[4695]: I1126 13:41:17.494734 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-955677c94-twcbz" podStartSLOduration=7.56892936 podStartE2EDuration="27.494707658s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.363921418 +0000 UTC m=+1034.999746500" lastFinishedPulling="2025-11-26 13:41:11.289699716 +0000 UTC m=+1054.925524798" observedRunningTime="2025-11-26 13:41:17.490912578 +0000 UTC m=+1061.126737670" watchObservedRunningTime="2025-11-26 13:41:17.494707658 +0000 UTC m=+1061.130532740" Nov 26 13:41:17 crc kubenswrapper[4695]: I1126 13:41:17.562646 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" podStartSLOduration=18.583601641 podStartE2EDuration="27.562608979s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:41:06.69008292 +0000 UTC m=+1050.325908012" lastFinishedPulling="2025-11-26 13:41:15.669090268 +0000 UTC m=+1059.304915350" observedRunningTime="2025-11-26 13:41:17.547861052 +0000 UTC m=+1061.183686144" watchObservedRunningTime="2025-11-26 13:41:17.562608979 +0000 UTC m=+1061.198434061" Nov 26 13:41:18 crc kubenswrapper[4695]: I1126 13:41:18.128427 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" event={"ID":"b87cee59-5442-46a0-b5d2-8467196ceedf","Type":"ContainerStarted","Data":"3fb46e8767b3ae9db8f68a0e63a53f0b2bdf289303d9d6a00ecfb4cd3a0ba06f"} Nov 26 13:41:18 crc kubenswrapper[4695]: I1126 13:41:18.128563 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" Nov 26 13:41:18 crc kubenswrapper[4695]: I1126 13:41:18.132628 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" event={"ID":"c66a73b5-1103-497f-87a6-70d964111fc9","Type":"ContainerStarted","Data":"c2ebb58728128abcb6f5ae11d4031a657f4467f93df872768afb0f0bda8aea50"} Nov 26 13:41:18 crc kubenswrapper[4695]: I1126 13:41:18.141760 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" event={"ID":"8d102669-be66-4bc6-8328-3e7d8a66f4c1","Type":"ContainerStarted","Data":"39a813c9fec17fd70cb54bc8c6f16b119a9fa9bd48dcfea7729a10f82d16a0d1"} Nov 26 13:41:18 crc kubenswrapper[4695]: I1126 13:41:18.158719 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" podStartSLOduration=5.018504915 podStartE2EDuration="28.158686878s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.528938416 +0000 UTC m=+1036.164763498" lastFinishedPulling="2025-11-26 13:41:15.669120369 +0000 UTC m=+1059.304945461" observedRunningTime="2025-11-26 13:41:18.152247563 +0000 UTC m=+1061.788072645" watchObservedRunningTime="2025-11-26 13:41:18.158686878 +0000 UTC m=+1061.794511970" Nov 26 13:41:18 crc kubenswrapper[4695]: I1126 13:41:18.190643 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" podStartSLOduration=2.362514734 podStartE2EDuration="28.190615958s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:51.825318251 +0000 UTC m=+1035.461143323" lastFinishedPulling="2025-11-26 13:41:17.653419465 +0000 UTC m=+1061.289244547" observedRunningTime="2025-11-26 13:41:18.186817279 +0000 UTC m=+1061.822642461" watchObservedRunningTime="2025-11-26 13:41:18.190615958 +0000 UTC m=+1061.826441040" Nov 26 13:41:19 crc kubenswrapper[4695]: I1126 13:41:19.151611 4695 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" Nov 26 13:41:20 crc kubenswrapper[4695]: I1126 13:41:20.174500 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" event={"ID":"b044c065-4ba3-4390-88c9-340e2fc1ba2f","Type":"ContainerStarted","Data":"73f42a9f4be369e0e3b0622a7baf1a1b2b53959fa802dd7a07c9fe6c302d644b"} Nov 26 13:41:20 crc kubenswrapper[4695]: I1126 13:41:20.175044 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" Nov 26 13:41:20 crc kubenswrapper[4695]: I1126 13:41:20.193489 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" podStartSLOduration=3.327494087 podStartE2EDuration="30.193467392s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:40:52.5426513 +0000 UTC m=+1036.178476382" lastFinishedPulling="2025-11-26 13:41:19.408624605 +0000 UTC m=+1063.044449687" observedRunningTime="2025-11-26 13:41:20.191028845 +0000 UTC m=+1063.826853937" watchObservedRunningTime="2025-11-26 13:41:20.193467392 +0000 UTC m=+1063.829292474" Nov 26 13:41:20 crc kubenswrapper[4695]: I1126 13:41:20.879760 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5d499bf58b-n8tkz" Nov 26 13:41:21 crc kubenswrapper[4695]: I1126 13:41:21.185390 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-64cdc6ff96-s9mbz" Nov 26 13:41:21 crc kubenswrapper[4695]: I1126 13:41:21.404188 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-vdrzg" Nov 26 13:41:21 crc 
kubenswrapper[4695]: I1126 13:41:21.423856 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-656dcb59d4-jsmdt" Nov 26 13:41:21 crc kubenswrapper[4695]: I1126 13:41:21.425674 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cd6c7f4c8-z6584" Nov 26 13:41:22 crc kubenswrapper[4695]: I1126 13:41:22.742986 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:41:22 crc kubenswrapper[4695]: I1126 13:41:22.753837 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/071300a7-9f99-4e3f-8fd7-ceabb7ba738d-cert\") pod \"openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f\" (UID: \"071300a7-9f99-4e3f-8fd7-ceabb7ba738d\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:41:22 crc kubenswrapper[4695]: I1126 13:41:22.785859 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:41:23 crc kubenswrapper[4695]: I1126 13:41:23.273526 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f"] Nov 26 13:41:23 crc kubenswrapper[4695]: W1126 13:41:23.279217 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod071300a7_9f99_4e3f_8fd7_ceabb7ba738d.slice/crio-8bf089a06f612d24d7c71890da9377a22f4ba97a3867f91436ffdc6c1272a8ef WatchSource:0}: Error finding container 8bf089a06f612d24d7c71890da9377a22f4ba97a3867f91436ffdc6c1272a8ef: Status 404 returned error can't find the container with id 8bf089a06f612d24d7c71890da9377a22f4ba97a3867f91436ffdc6c1272a8ef Nov 26 13:41:24 crc kubenswrapper[4695]: I1126 13:41:24.206891 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" event={"ID":"071300a7-9f99-4e3f-8fd7-ceabb7ba738d","Type":"ContainerStarted","Data":"8bf089a06f612d24d7c71890da9377a22f4ba97a3867f91436ffdc6c1272a8ef"} Nov 26 13:41:27 crc kubenswrapper[4695]: I1126 13:41:27.345596 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6bcd57bc9d-w2r7r" Nov 26 13:41:28 crc kubenswrapper[4695]: I1126 13:41:28.274167 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-v2jnr" Nov 26 13:41:30 crc kubenswrapper[4695]: I1126 13:41:30.843505 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7b4567c7cf-ndnf7" Nov 26 13:41:31 crc kubenswrapper[4695]: I1126 13:41:31.376234 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/swift-operator-controller-manager-d77b94747-t7sqd" Nov 26 13:41:34 crc kubenswrapper[4695]: E1126 13:41:34.481378 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1367616521/1\": happened during read: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41" Nov 26 13:41:34 crc kubenswrapper[4695]: E1126 13:41:34.482299 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podifi
ed-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podifie
d-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_E
DPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_
HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAU
LT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_I
MAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACC
OUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flj4x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f_openstack-operators(071300a7-9f99-4e3f-8fd7-ceabb7ba738d): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1367616521/1\": happened during read: context canceled" logger="UnhandledError" Nov 26 13:41:36 crc kubenswrapper[4695]: E1126 13:41:36.148543 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1367616521/1\\\": happened during read: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" podUID="071300a7-9f99-4e3f-8fd7-ceabb7ba738d" Nov 26 13:41:36 crc kubenswrapper[4695]: I1126 13:41:36.303748 
4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" event={"ID":"071300a7-9f99-4e3f-8fd7-ceabb7ba738d","Type":"ContainerStarted","Data":"e30f63e36de47c246cd35a8c56b768d3019641b754e5e2d952ed4294c47ab7bb"} Nov 26 13:41:36 crc kubenswrapper[4695]: E1126 13:41:36.305296 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" podUID="071300a7-9f99-4e3f-8fd7-ceabb7ba738d" Nov 26 13:41:37 crc kubenswrapper[4695]: E1126 13:41:37.313653 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:51a478c52d9012c08743f63b44a3721c7ff7a0599ba9c2cf89ad54ea41b19e41\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" podUID="071300a7-9f99-4e3f-8fd7-ceabb7ba738d" Nov 26 13:41:53 crc kubenswrapper[4695]: I1126 13:41:53.432634 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" event={"ID":"071300a7-9f99-4e3f-8fd7-ceabb7ba738d","Type":"ContainerStarted","Data":"47987f3ebf8f43ad7a7812ff9c2b75d80093dfdecfa9dec755cd60258138fb01"} Nov 26 13:41:53 crc kubenswrapper[4695]: I1126 13:41:53.433257 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:41:53 crc kubenswrapper[4695]: I1126 13:41:53.461024 4695 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" podStartSLOduration=34.0668701 podStartE2EDuration="1m3.461001774s" podCreationTimestamp="2025-11-26 13:40:50 +0000 UTC" firstStartedPulling="2025-11-26 13:41:23.281562287 +0000 UTC m=+1066.917387369" lastFinishedPulling="2025-11-26 13:41:52.675693961 +0000 UTC m=+1096.311519043" observedRunningTime="2025-11-26 13:41:53.45551248 +0000 UTC m=+1097.091337562" watchObservedRunningTime="2025-11-26 13:41:53.461001774 +0000 UTC m=+1097.096826856" Nov 26 13:42:02 crc kubenswrapper[4695]: I1126 13:42:02.792461 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f" Nov 26 13:42:06 crc kubenswrapper[4695]: I1126 13:42:06.397014 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:42:06 crc kubenswrapper[4695]: I1126 13:42:06.397527 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.914428 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vg9ml"] Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.917185 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.919496 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.919728 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.920060 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-l96jh" Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.920577 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.936087 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vg9ml"] Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.991971 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m2n5z"] Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.993846 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:18 crc kubenswrapper[4695]: I1126 13:42:18.999022 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.007155 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m2n5z"] Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.024515 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-config\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.024586 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbkrw\" (UniqueName: \"kubernetes.io/projected/9f13afbc-253a-4103-a034-c740630a53b6-kube-api-access-mbkrw\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.024632 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.024676 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hsgp\" (UniqueName: \"kubernetes.io/projected/02b12044-0911-4dd6-b980-755d9c39ba91-kube-api-access-2hsgp\") pod \"dnsmasq-dns-675f4bcbfc-vg9ml\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " 
pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.024734 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b12044-0911-4dd6-b980-755d9c39ba91-config\") pod \"dnsmasq-dns-675f4bcbfc-vg9ml\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.126884 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hsgp\" (UniqueName: \"kubernetes.io/projected/02b12044-0911-4dd6-b980-755d9c39ba91-kube-api-access-2hsgp\") pod \"dnsmasq-dns-675f4bcbfc-vg9ml\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.127482 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b12044-0911-4dd6-b980-755d9c39ba91-config\") pod \"dnsmasq-dns-675f4bcbfc-vg9ml\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.127523 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-config\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.130607 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b12044-0911-4dd6-b980-755d9c39ba91-config\") pod \"dnsmasq-dns-675f4bcbfc-vg9ml\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 
13:42:19.130978 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-config\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.127560 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbkrw\" (UniqueName: \"kubernetes.io/projected/9f13afbc-253a-4103-a034-c740630a53b6-kube-api-access-mbkrw\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.131158 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.132161 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.181771 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbkrw\" (UniqueName: \"kubernetes.io/projected/9f13afbc-253a-4103-a034-c740630a53b6-kube-api-access-mbkrw\") pod \"dnsmasq-dns-78dd6ddcc-m2n5z\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.182877 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2hsgp\" (UniqueName: \"kubernetes.io/projected/02b12044-0911-4dd6-b980-755d9c39ba91-kube-api-access-2hsgp\") pod \"dnsmasq-dns-675f4bcbfc-vg9ml\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.250636 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.317792 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.768110 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vg9ml"] Nov 26 13:42:19 crc kubenswrapper[4695]: I1126 13:42:19.840214 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m2n5z"] Nov 26 13:42:19 crc kubenswrapper[4695]: W1126 13:42:19.843394 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f13afbc_253a_4103_a034_c740630a53b6.slice/crio-68f778f0df1053b602009b892099a2afe3f55bc6fcb6ff9d4301123379bcc284 WatchSource:0}: Error finding container 68f778f0df1053b602009b892099a2afe3f55bc6fcb6ff9d4301123379bcc284: Status 404 returned error can't find the container with id 68f778f0df1053b602009b892099a2afe3f55bc6fcb6ff9d4301123379bcc284 Nov 26 13:42:20 crc kubenswrapper[4695]: I1126 13:42:20.650385 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" event={"ID":"02b12044-0911-4dd6-b980-755d9c39ba91","Type":"ContainerStarted","Data":"2c098d73327566cb226442f3ae81b4e46fe1116c8f559d8028c97c277de1685e"} Nov 26 13:42:20 crc kubenswrapper[4695]: I1126 13:42:20.651809 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" 
event={"ID":"9f13afbc-253a-4103-a034-c740630a53b6","Type":"ContainerStarted","Data":"68f778f0df1053b602009b892099a2afe3f55bc6fcb6ff9d4301123379bcc284"} Nov 26 13:42:21 crc kubenswrapper[4695]: I1126 13:42:21.990701 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vg9ml"] Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.021447 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zz9gr"] Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.023256 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.036062 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zz9gr"] Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.089154 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghmk6\" (UniqueName: \"kubernetes.io/projected/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-kube-api-access-ghmk6\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.089235 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.089398 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-config\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " 
pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.190647 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghmk6\" (UniqueName: \"kubernetes.io/projected/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-kube-api-access-ghmk6\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.190745 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.190834 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-config\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.193096 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-config\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.193405 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-dns-svc\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.216803 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghmk6\" (UniqueName: \"kubernetes.io/projected/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-kube-api-access-ghmk6\") pod \"dnsmasq-dns-666b6646f7-zz9gr\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.371191 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.393116 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m2n5z"] Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.441825 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mlxwb"] Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.444881 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.458042 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mlxwb"] Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.496626 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.496690 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-config\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc 
kubenswrapper[4695]: I1126 13:42:22.496744 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qf8c\" (UniqueName: \"kubernetes.io/projected/4ce3ac6a-a69e-4066-a452-043e99787c41-kube-api-access-8qf8c\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.598528 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qf8c\" (UniqueName: \"kubernetes.io/projected/4ce3ac6a-a69e-4066-a452-043e99787c41-kube-api-access-8qf8c\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.600639 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.600690 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-config\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.601651 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-config\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.602199 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.629426 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qf8c\" (UniqueName: \"kubernetes.io/projected/4ce3ac6a-a69e-4066-a452-043e99787c41-kube-api-access-8qf8c\") pod \"dnsmasq-dns-57d769cc4f-mlxwb\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.791922 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:22 crc kubenswrapper[4695]: I1126 13:42:22.997418 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zz9gr"] Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.214993 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.216234 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.219464 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.219694 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.219854 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.219971 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.220123 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.220233 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5rpqq" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.220340 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.229364 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.309626 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.309691 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310002 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310088 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310174 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27495d77-50c6-4476-86c3-dafb0e5dbb97-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310291 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310312 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27495d77-50c6-4476-86c3-dafb0e5dbb97-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310446 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310520 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-config-data\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.310557 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.311170 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbvz4\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-kube-api-access-dbvz4\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.412836 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " 
pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.412911 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbvz4\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-kube-api-access-dbvz4\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.412966 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413004 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413070 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413099 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413128 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27495d77-50c6-4476-86c3-dafb0e5dbb97-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413172 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413196 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27495d77-50c6-4476-86c3-dafb0e5dbb97-pod-info\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413226 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413253 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-config-data\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.413965 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.414911 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-server-conf\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.415420 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.415619 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.418729 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-config-data\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.419506 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.419744 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.421992 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.428585 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27495d77-50c6-4476-86c3-dafb0e5dbb97-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.430317 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27495d77-50c6-4476-86c3-dafb0e5dbb97-pod-info\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.434550 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbvz4\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-kube-api-access-dbvz4\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.445029 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mlxwb"] Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.490941 4695 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.576292 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.583469 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.584755 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.589546 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.589666 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.590911 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.591085 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.591152 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.591484 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ws9b2" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.592740 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.622822 
4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.622872 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.622893 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.622911 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.622945 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.622966 
4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.622990 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a771ad6-98a9-474e-83f0-e17fecdee9be-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.623010 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.623025 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgk6r\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-kube-api-access-zgk6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.623045 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.623077 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a771ad6-98a9-474e-83f0-e17fecdee9be-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.638907 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.704546 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" event={"ID":"4ce3ac6a-a69e-4066-a452-043e99787c41","Type":"ContainerStarted","Data":"0097f4012592a910fb2d1a90c463ec6e9bb19a44b591057fa7515e490a9eb1d0"} Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.707665 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" event={"ID":"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940","Type":"ContainerStarted","Data":"10bd9d1fd16c3cb53480fa3ab48a184795d0891a831740a794f0749892577dbd"} Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729175 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729309 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729402 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729438 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729545 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729604 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729665 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a771ad6-98a9-474e-83f0-e17fecdee9be-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729700 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729725 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgk6r\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-kube-api-access-zgk6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729777 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.729873 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a771ad6-98a9-474e-83f0-e17fecdee9be-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.730008 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.730026 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"6a771ad6-98a9-474e-83f0-e17fecdee9be\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.730530 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.730904 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.731367 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.732047 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.733921 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc 
kubenswrapper[4695]: I1126 13:42:23.734469 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a771ad6-98a9-474e-83f0-e17fecdee9be-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.735544 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.753307 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a771ad6-98a9-474e-83f0-e17fecdee9be-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.754322 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgk6r\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-kube-api-access-zgk6r\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.770765 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.926951 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:42:23 crc kubenswrapper[4695]: I1126 13:42:23.979632 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.901893 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.904144 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.914110 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.916402 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.916712 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.918668 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-h8fwh" Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.918802 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 26 13:42:24 crc kubenswrapper[4695]: I1126 13:42:24.922414 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.054250 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-config-data-default\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 
13:42:25.054370 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.054419 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/82b6b21a-6ed0-43d7-9763-684eca59aa29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.054484 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b6b21a-6ed0-43d7-9763-684eca59aa29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.054570 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.054597 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-kolla-config\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.054654 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlbk5\" (UniqueName: \"kubernetes.io/projected/82b6b21a-6ed0-43d7-9763-684eca59aa29-kube-api-access-vlbk5\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.054676 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b6b21a-6ed0-43d7-9763-684eca59aa29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158326 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158413 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-kolla-config\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158492 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlbk5\" (UniqueName: \"kubernetes.io/projected/82b6b21a-6ed0-43d7-9763-684eca59aa29-kube-api-access-vlbk5\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158520 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/82b6b21a-6ed0-43d7-9763-684eca59aa29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158562 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-config-data-default\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158609 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158651 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/82b6b21a-6ed0-43d7-9763-684eca59aa29-config-data-generated\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.158669 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b6b21a-6ed0-43d7-9763-684eca59aa29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.161088 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/82b6b21a-6ed0-43d7-9763-684eca59aa29-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.161094 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.161813 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-kolla-config\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.163032 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-operator-scripts\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.177907 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/82b6b21a-6ed0-43d7-9763-684eca59aa29-config-data-default\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.180052 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/82b6b21a-6ed0-43d7-9763-684eca59aa29-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.183837 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlbk5\" (UniqueName: \"kubernetes.io/projected/82b6b21a-6ed0-43d7-9763-684eca59aa29-kube-api-access-vlbk5\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.185838 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82b6b21a-6ed0-43d7-9763-684eca59aa29-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.192844 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"82b6b21a-6ed0-43d7-9763-684eca59aa29\") " pod="openstack/openstack-galera-0" Nov 26 13:42:25 crc kubenswrapper[4695]: I1126 13:42:25.253063 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.248598 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.250831 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.253090 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.253195 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-g8qtd" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.253399 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.253521 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.260055 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382431 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b02f07d7-7406-4602-b166-911408fe8bf0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382502 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382552 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382584 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382613 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xljx\" (UniqueName: \"kubernetes.io/projected/b02f07d7-7406-4602-b166-911408fe8bf0-kube-api-access-2xljx\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382638 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382704 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b02f07d7-7406-4602-b166-911408fe8bf0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.382725 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b02f07d7-7406-4602-b166-911408fe8bf0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484280 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484334 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484377 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xljx\" (UniqueName: \"kubernetes.io/projected/b02f07d7-7406-4602-b166-911408fe8bf0-kube-api-access-2xljx\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484400 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484437 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b02f07d7-7406-4602-b166-911408fe8bf0-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484456 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b02f07d7-7406-4602-b166-911408fe8bf0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484494 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b02f07d7-7406-4602-b166-911408fe8bf0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484521 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.484946 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.485374 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: 
\"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.488967 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b02f07d7-7406-4602-b166-911408fe8bf0-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.504868 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.505964 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b02f07d7-7406-4602-b166-911408fe8bf0-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.506916 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b02f07d7-7406-4602-b166-911408fe8bf0-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.507145 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b02f07d7-7406-4602-b166-911408fe8bf0-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc 
kubenswrapper[4695]: I1126 13:42:26.511197 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.512435 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.514490 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.514642 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-c4zf6" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.515291 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.529913 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xljx\" (UniqueName: \"kubernetes.io/projected/b02f07d7-7406-4602-b166-911408fe8bf0-kube-api-access-2xljx\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.531864 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.572583 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b02f07d7-7406-4602-b166-911408fe8bf0\") " pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.588369 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ed69b1-d83c-4967-a627-6e52dc6da41b-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.588474 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95ed69b1-d83c-4967-a627-6e52dc6da41b-config-data\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.588546 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ed69b1-d83c-4967-a627-6e52dc6da41b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.588586 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95ed69b1-d83c-4967-a627-6e52dc6da41b-kolla-config\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.588633 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47mbl\" (UniqueName: \"kubernetes.io/projected/95ed69b1-d83c-4967-a627-6e52dc6da41b-kube-api-access-47mbl\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.624237 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.690672 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/95ed69b1-d83c-4967-a627-6e52dc6da41b-config-data\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.690753 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ed69b1-d83c-4967-a627-6e52dc6da41b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.690784 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95ed69b1-d83c-4967-a627-6e52dc6da41b-kolla-config\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.690855 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47mbl\" (UniqueName: \"kubernetes.io/projected/95ed69b1-d83c-4967-a627-6e52dc6da41b-kube-api-access-47mbl\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.690927 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ed69b1-d83c-4967-a627-6e52dc6da41b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.693608 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/95ed69b1-d83c-4967-a627-6e52dc6da41b-config-data\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.693805 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/95ed69b1-d83c-4967-a627-6e52dc6da41b-kolla-config\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.698038 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/95ed69b1-d83c-4967-a627-6e52dc6da41b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.713389 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47mbl\" (UniqueName: \"kubernetes.io/projected/95ed69b1-d83c-4967-a627-6e52dc6da41b-kube-api-access-47mbl\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.729405 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/95ed69b1-d83c-4967-a627-6e52dc6da41b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"95ed69b1-d83c-4967-a627-6e52dc6da41b\") " pod="openstack/memcached-0" Nov 26 13:42:26 crc kubenswrapper[4695]: I1126 13:42:26.930493 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Nov 26 13:42:27 crc kubenswrapper[4695]: W1126 13:42:27.568517 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod27495d77_50c6_4476_86c3_dafb0e5dbb97.slice/crio-fe6b19585934a298bdfe08e7003ca109bf968f655514a892f2e422361bb901d5 WatchSource:0}: Error finding container fe6b19585934a298bdfe08e7003ca109bf968f655514a892f2e422361bb901d5: Status 404 returned error can't find the container with id fe6b19585934a298bdfe08e7003ca109bf968f655514a892f2e422361bb901d5 Nov 26 13:42:27 crc kubenswrapper[4695]: I1126 13:42:27.752470 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27495d77-50c6-4476-86c3-dafb0e5dbb97","Type":"ContainerStarted","Data":"fe6b19585934a298bdfe08e7003ca109bf968f655514a892f2e422361bb901d5"} Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.255764 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.257098 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.260727 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zn7rw" Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.282626 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.324912 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc7db\" (UniqueName: \"kubernetes.io/projected/814d5f70-9acd-4a44-9ba1-5f6b95933a80-kube-api-access-xc7db\") pod \"kube-state-metrics-0\" (UID: \"814d5f70-9acd-4a44-9ba1-5f6b95933a80\") " pod="openstack/kube-state-metrics-0" Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.426453 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc7db\" (UniqueName: \"kubernetes.io/projected/814d5f70-9acd-4a44-9ba1-5f6b95933a80-kube-api-access-xc7db\") pod \"kube-state-metrics-0\" (UID: \"814d5f70-9acd-4a44-9ba1-5f6b95933a80\") " pod="openstack/kube-state-metrics-0" Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.445835 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc7db\" (UniqueName: \"kubernetes.io/projected/814d5f70-9acd-4a44-9ba1-5f6b95933a80-kube-api-access-xc7db\") pod \"kube-state-metrics-0\" (UID: \"814d5f70-9acd-4a44-9ba1-5f6b95933a80\") " pod="openstack/kube-state-metrics-0" Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.576039 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 13:42:28 crc kubenswrapper[4695]: I1126 13:42:28.875367 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 26 13:42:31 crc kubenswrapper[4695]: W1126 13:42:31.282282 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95ed69b1_d83c_4967_a627_6e52dc6da41b.slice/crio-05f64104b241e6f629aff9d3f3827dd93ae514275e0adb751f43cab966f29ff7 WatchSource:0}: Error finding container 05f64104b241e6f629aff9d3f3827dd93ae514275e0adb751f43cab966f29ff7: Status 404 returned error can't find the container with id 05f64104b241e6f629aff9d3f3827dd93ae514275e0adb751f43cab966f29ff7 Nov 26 13:42:31 crc kubenswrapper[4695]: I1126 13:42:31.802869 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"95ed69b1-d83c-4967-a627-6e52dc6da41b","Type":"ContainerStarted","Data":"05f64104b241e6f629aff9d3f3827dd93ae514275e0adb751f43cab966f29ff7"} Nov 26 13:42:31 crc kubenswrapper[4695]: I1126 13:42:31.962195 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zvx8d"] Nov 26 13:42:31 crc kubenswrapper[4695]: I1126 13:42:31.963166 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:31 crc kubenswrapper[4695]: I1126 13:42:31.965098 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 26 13:42:31 crc kubenswrapper[4695]: I1126 13:42:31.965366 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vpqsh" Nov 26 13:42:31 crc kubenswrapper[4695]: I1126 13:42:31.965518 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 26 13:42:31 crc kubenswrapper[4695]: I1126 13:42:31.977926 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvx8d"] Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:31.999906 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rtt8r"] Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.007105 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.019851 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rtt8r"] Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.078509 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f98833b-dbaf-42bc-a424-8094e025ce87-ovn-controller-tls-certs\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.079900 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f98833b-dbaf-42bc-a424-8094e025ce87-scripts\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.080340 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-run-ovn\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.080499 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f98833b-dbaf-42bc-a424-8094e025ce87-combined-ca-bundle\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.080579 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7tzm\" (UniqueName: 
\"kubernetes.io/projected/9f98833b-dbaf-42bc-a424-8094e025ce87-kube-api-access-p7tzm\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.080673 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7ppt\" (UniqueName: \"kubernetes.io/projected/35f623f4-096c-4ac0-9b93-b489fda7cf09-kube-api-access-c7ppt\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.080758 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-run\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.080852 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-etc-ovs\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.080927 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-run\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.081107 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-log\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.081276 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35f623f4-096c-4ac0-9b93-b489fda7cf09-scripts\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.081417 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-lib\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.081509 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-log-ovn\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.183776 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f98833b-dbaf-42bc-a424-8094e025ce87-ovn-controller-tls-certs\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.183844 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f98833b-dbaf-42bc-a424-8094e025ce87-scripts\") pod 
\"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.183898 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-run-ovn\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.183912 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f98833b-dbaf-42bc-a424-8094e025ce87-combined-ca-bundle\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.183929 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7tzm\" (UniqueName: \"kubernetes.io/projected/9f98833b-dbaf-42bc-a424-8094e025ce87-kube-api-access-p7tzm\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.183955 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7ppt\" (UniqueName: \"kubernetes.io/projected/35f623f4-096c-4ac0-9b93-b489fda7cf09-kube-api-access-c7ppt\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.183975 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-run\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" 
Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184007 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-etc-ovs\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184029 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-run\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184068 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-log\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184122 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35f623f4-096c-4ac0-9b93-b489fda7cf09-scripts\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184162 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-lib\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184182 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" 
(UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-log-ovn\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184686 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-etc-ovs\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.184974 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-run-ovn\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.185540 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-run\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.185653 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-log\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.185814 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-lib\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 
26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.185875 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/9f98833b-dbaf-42bc-a424-8094e025ce87-var-log-ovn\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.186018 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/35f623f4-096c-4ac0-9b93-b489fda7cf09-var-run\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.186248 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f98833b-dbaf-42bc-a424-8094e025ce87-scripts\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.187683 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/35f623f4-096c-4ac0-9b93-b489fda7cf09-scripts\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.203636 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f98833b-dbaf-42bc-a424-8094e025ce87-ovn-controller-tls-certs\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.203662 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9f98833b-dbaf-42bc-a424-8094e025ce87-combined-ca-bundle\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.205911 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7tzm\" (UniqueName: \"kubernetes.io/projected/9f98833b-dbaf-42bc-a424-8094e025ce87-kube-api-access-p7tzm\") pod \"ovn-controller-zvx8d\" (UID: \"9f98833b-dbaf-42bc-a424-8094e025ce87\") " pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.208650 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7ppt\" (UniqueName: \"kubernetes.io/projected/35f623f4-096c-4ac0-9b93-b489fda7cf09-kube-api-access-c7ppt\") pod \"ovn-controller-ovs-rtt8r\" (UID: \"35f623f4-096c-4ac0-9b93-b489fda7cf09\") " pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.299585 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.372333 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.870314 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.872309 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.880608 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.880613 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-vv9xz" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.880623 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.880685 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.881411 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 26 13:42:32 crc kubenswrapper[4695]: I1126 13:42:32.886315 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000016 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000125 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000200 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b28b52fd-d5e1-44b4-af26-9fa98d731335-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000325 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28b52fd-d5e1-44b4-af26-9fa98d731335-config\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000414 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lw2g\" (UniqueName: \"kubernetes.io/projected/b28b52fd-d5e1-44b4-af26-9fa98d731335-kube-api-access-8lw2g\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000442 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b28b52fd-d5e1-44b4-af26-9fa98d731335-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000516 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.000604 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104159 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104237 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104269 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104310 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b28b52fd-d5e1-44b4-af26-9fa98d731335-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104377 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28b52fd-d5e1-44b4-af26-9fa98d731335-config\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " 
pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104400 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lw2g\" (UniqueName: \"kubernetes.io/projected/b28b52fd-d5e1-44b4-af26-9fa98d731335-kube-api-access-8lw2g\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104415 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b28b52fd-d5e1-44b4-af26-9fa98d731335-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.104443 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.105410 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.105801 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28b52fd-d5e1-44b4-af26-9fa98d731335-config\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.106951 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/b28b52fd-d5e1-44b4-af26-9fa98d731335-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.107543 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b28b52fd-d5e1-44b4-af26-9fa98d731335-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.109398 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.109696 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.110468 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b28b52fd-d5e1-44b4-af26-9fa98d731335-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.122665 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lw2g\" (UniqueName: \"kubernetes.io/projected/b28b52fd-d5e1-44b4-af26-9fa98d731335-kube-api-access-8lw2g\") pod \"ovsdbserver-sb-0\" (UID: 
\"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.140147 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b28b52fd-d5e1-44b4-af26-9fa98d731335\") " pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:33 crc kubenswrapper[4695]: I1126 13:42:33.199861 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.140197 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.141783 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.144103 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-w7n5q" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.144411 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.144449 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.146704 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.159630 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.258854 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.258983 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.259284 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.259364 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.259543 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-config\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.259679 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-metrics-certs-tls-certs\") pod 
\"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.259725 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.259748 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwlrw\" (UniqueName: \"kubernetes.io/projected/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-kube-api-access-qwlrw\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367247 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367319 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367411 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-config\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367473 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367508 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367531 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwlrw\" (UniqueName: \"kubernetes.io/projected/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-kube-api-access-qwlrw\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367574 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.367630 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.373519 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.374249 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.374402 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.374684 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.375163 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-config\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.380489 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc 
kubenswrapper[4695]: I1126 13:42:36.380566 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.395493 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwlrw\" (UniqueName: \"kubernetes.io/projected/2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8-kube-api-access-qwlrw\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.396972 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.397026 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8\") " pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.397039 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:42:36 crc kubenswrapper[4695]: I1126 13:42:36.461918 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 26 13:42:37 crc kubenswrapper[4695]: I1126 13:42:37.692804 4695 scope.go:117] "RemoveContainer" containerID="4015066dcee0fd5cdb2fe803a6167a04145f056f23bec48027d4adb68b677712" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.249143 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.249701 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbkrw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-m2n5z_openstack(9f13afbc-253a-4103-a034-c740630a53b6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.250993 4695 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" podUID="9f13afbc-253a-4103-a034-c740630a53b6" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.254403 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.254817 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hsgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-vg9ml_openstack(02b12044-0911-4dd6-b980-755d9c39ba91): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.261164 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" podUID="02b12044-0911-4dd6-b980-755d9c39ba91" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.352519 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.352654 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghmk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-zz9gr_openstack(e533cd9f-f8ed-4d7c-9f3b-17e2e460c940): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.353973 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" Nov 26 13:42:40 crc kubenswrapper[4695]: I1126 13:42:40.611625 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 26 13:42:40 crc kubenswrapper[4695]: I1126 13:42:40.657841 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 13:42:40 crc kubenswrapper[4695]: I1126 13:42:40.710172 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 26 13:42:40 crc kubenswrapper[4695]: E1126 13:42:40.890981 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" Nov 
26 13:42:45 crc kubenswrapper[4695]: I1126 13:42:45.974244 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" event={"ID":"02b12044-0911-4dd6-b980-755d9c39ba91","Type":"ContainerDied","Data":"2c098d73327566cb226442f3ae81b4e46fe1116c8f559d8028c97c277de1685e"} Nov 26 13:42:45 crc kubenswrapper[4695]: I1126 13:42:45.974814 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c098d73327566cb226442f3ae81b4e46fe1116c8f559d8028c97c277de1685e" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.005274 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.178866 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hsgp\" (UniqueName: \"kubernetes.io/projected/02b12044-0911-4dd6-b980-755d9c39ba91-kube-api-access-2hsgp\") pod \"02b12044-0911-4dd6-b980-755d9c39ba91\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.179411 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b12044-0911-4dd6-b980-755d9c39ba91-config\") pod \"02b12044-0911-4dd6-b980-755d9c39ba91\" (UID: \"02b12044-0911-4dd6-b980-755d9c39ba91\") " Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.179872 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b12044-0911-4dd6-b980-755d9c39ba91-config" (OuterVolumeSpecName: "config") pod "02b12044-0911-4dd6-b980-755d9c39ba91" (UID: "02b12044-0911-4dd6-b980-755d9c39ba91"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.187813 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b12044-0911-4dd6-b980-755d9c39ba91-kube-api-access-2hsgp" (OuterVolumeSpecName: "kube-api-access-2hsgp") pod "02b12044-0911-4dd6-b980-755d9c39ba91" (UID: "02b12044-0911-4dd6-b980-755d9c39ba91"). InnerVolumeSpecName "kube-api-access-2hsgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.282067 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b12044-0911-4dd6-b980-755d9c39ba91-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.282116 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hsgp\" (UniqueName: \"kubernetes.io/projected/02b12044-0911-4dd6-b980-755d9c39ba91-kube-api-access-2hsgp\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.753805 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.895034 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbkrw\" (UniqueName: \"kubernetes.io/projected/9f13afbc-253a-4103-a034-c740630a53b6-kube-api-access-mbkrw\") pod \"9f13afbc-253a-4103-a034-c740630a53b6\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.896408 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-config" (OuterVolumeSpecName: "config") pod "9f13afbc-253a-4103-a034-c740630a53b6" (UID: "9f13afbc-253a-4103-a034-c740630a53b6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.896485 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-config\") pod \"9f13afbc-253a-4103-a034-c740630a53b6\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.896582 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-dns-svc\") pod \"9f13afbc-253a-4103-a034-c740630a53b6\" (UID: \"9f13afbc-253a-4103-a034-c740630a53b6\") " Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.897197 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9f13afbc-253a-4103-a034-c740630a53b6" (UID: "9f13afbc-253a-4103-a034-c740630a53b6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.898048 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.898071 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f13afbc-253a-4103-a034-c740630a53b6-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.901953 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f13afbc-253a-4103-a034-c740630a53b6-kube-api-access-mbkrw" (OuterVolumeSpecName: "kube-api-access-mbkrw") pod "9f13afbc-253a-4103-a034-c740630a53b6" (UID: "9f13afbc-253a-4103-a034-c740630a53b6"). InnerVolumeSpecName "kube-api-access-mbkrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.992101 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" event={"ID":"9f13afbc-253a-4103-a034-c740630a53b6","Type":"ContainerDied","Data":"68f778f0df1053b602009b892099a2afe3f55bc6fcb6ff9d4301123379bcc284"} Nov 26 13:42:46 crc kubenswrapper[4695]: I1126 13:42:46.992194 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-m2n5z" Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.002214 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbkrw\" (UniqueName: \"kubernetes.io/projected/9f13afbc-253a-4103-a034-c740630a53b6-kube-api-access-mbkrw\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.007577 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b02f07d7-7406-4602-b166-911408fe8bf0","Type":"ContainerStarted","Data":"41a281432e3f41e4646e008347661a2828a8da6b674ea5fb1e6042240863f849"} Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.009120 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"82b6b21a-6ed0-43d7-9763-684eca59aa29","Type":"ContainerStarted","Data":"b3048419c4f047aca876b7f2c0569a564beb3a052912d95ce135b24fbccb481d"} Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.011367 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a771ad6-98a9-474e-83f0-e17fecdee9be","Type":"ContainerStarted","Data":"9815c367d19f282af6d911c6d924611abbb3c3900864989705eaeb4fd861b3af"} Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.011441 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-vg9ml" Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.150188 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m2n5z"] Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.160378 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-m2n5z"] Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.225877 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f13afbc-253a-4103-a034-c740630a53b6" path="/var/lib/kubelet/pods/9f13afbc-253a-4103-a034-c740630a53b6/volumes" Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.226694 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vg9ml"] Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.247739 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-vg9ml"] Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.264518 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.362939 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvx8d"] Nov 26 13:42:47 crc kubenswrapper[4695]: W1126 13:42:47.385960 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f98833b_dbaf_42bc_a424_8094e025ce87.slice/crio-5cae947dda16262c0399b0bd3c3a824bc1763e4d41e1750aaa6d29fe64f178d3 WatchSource:0}: Error finding container 5cae947dda16262c0399b0bd3c3a824bc1763e4d41e1750aaa6d29fe64f178d3: Status 404 returned error can't find the container with id 5cae947dda16262c0399b0bd3c3a824bc1763e4d41e1750aaa6d29fe64f178d3 Nov 26 13:42:47 crc kubenswrapper[4695]: I1126 13:42:47.490597 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rtt8r"] Nov 26 
13:42:47 crc kubenswrapper[4695]: W1126 13:42:47.497778 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35f623f4_096c_4ac0_9b93_b489fda7cf09.slice/crio-43661ac0b99d9558c10de137bb32c8ad7b99b38f300ba6381873cc3d2156ded7 WatchSource:0}: Error finding container 43661ac0b99d9558c10de137bb32c8ad7b99b38f300ba6381873cc3d2156ded7: Status 404 returned error can't find the container with id 43661ac0b99d9558c10de137bb32c8ad7b99b38f300ba6381873cc3d2156ded7 Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.021667 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rtt8r" event={"ID":"35f623f4-096c-4ac0-9b93-b489fda7cf09","Type":"ContainerStarted","Data":"43661ac0b99d9558c10de137bb32c8ad7b99b38f300ba6381873cc3d2156ded7"} Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.023292 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"814d5f70-9acd-4a44-9ba1-5f6b95933a80","Type":"ContainerStarted","Data":"bbef412344895c2cf16fff878ac007c6ae051794512a07212a7d497d1a38b8aa"} Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.025112 4695 generic.go:334] "Generic (PLEG): container finished" podID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerID="8d24d52db01665fd6c5a9faf9faa8e5d48c20218905234412d73a0d33e38e258" exitCode=0 Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.025158 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" event={"ID":"4ce3ac6a-a69e-4066-a452-043e99787c41","Type":"ContainerDied","Data":"8d24d52db01665fd6c5a9faf9faa8e5d48c20218905234412d73a0d33e38e258"} Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.028268 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"95ed69b1-d83c-4967-a627-6e52dc6da41b","Type":"ContainerStarted","Data":"da0c34bcb64ddc814ef40ac450fc746051559e6745c00bb618f5f1875c61e876"} Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.028392 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.033113 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d" event={"ID":"9f98833b-dbaf-42bc-a424-8094e025ce87","Type":"ContainerStarted","Data":"5cae947dda16262c0399b0bd3c3a824bc1763e4d41e1750aaa6d29fe64f178d3"} Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.261015 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=6.691103555 podStartE2EDuration="22.260996239s" podCreationTimestamp="2025-11-26 13:42:26 +0000 UTC" firstStartedPulling="2025-11-26 13:42:31.28855952 +0000 UTC m=+1134.924384602" lastFinishedPulling="2025-11-26 13:42:46.858452204 +0000 UTC m=+1150.494277286" observedRunningTime="2025-11-26 13:42:48.071898407 +0000 UTC m=+1151.707723489" watchObservedRunningTime="2025-11-26 13:42:48.260996239 +0000 UTC m=+1151.896821321" Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.264248 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 26 13:42:48 crc kubenswrapper[4695]: I1126 13:42:48.372105 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 26 13:42:48 crc kubenswrapper[4695]: W1126 13:42:48.388196 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a4fb1e7_a0a8_4bbb_8a3f_52da203e00d8.slice/crio-9a51e108523dd461efc5dbfbb9ccaf3fa138b626becd125e68402dffad684b74 WatchSource:0}: Error finding container 9a51e108523dd461efc5dbfbb9ccaf3fa138b626becd125e68402dffad684b74: Status 404 returned error can't find the container 
with id 9a51e108523dd461efc5dbfbb9ccaf3fa138b626becd125e68402dffad684b74 Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.044716 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" event={"ID":"4ce3ac6a-a69e-4066-a452-043e99787c41","Type":"ContainerStarted","Data":"6a04c893ab0c01a14032c17f29695bc84b7d2b76406c4ec9dfe4e4c694795ba0"} Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.045859 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.050944 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a771ad6-98a9-474e-83f0-e17fecdee9be","Type":"ContainerStarted","Data":"67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93"} Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.052554 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8","Type":"ContainerStarted","Data":"9a51e108523dd461efc5dbfbb9ccaf3fa138b626becd125e68402dffad684b74"} Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.054223 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b28b52fd-d5e1-44b4-af26-9fa98d731335","Type":"ContainerStarted","Data":"b3afcd0221e28adeeae4556c4ca94c092883d6b9f72a0c35f3795ffc55c34455"} Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.056750 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27495d77-50c6-4476-86c3-dafb0e5dbb97","Type":"ContainerStarted","Data":"cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854"} Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.069062 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" podStartSLOduration=3.755582995 
podStartE2EDuration="27.069045282s" podCreationTimestamp="2025-11-26 13:42:22 +0000 UTC" firstStartedPulling="2025-11-26 13:42:23.541469904 +0000 UTC m=+1127.177294986" lastFinishedPulling="2025-11-26 13:42:46.854932191 +0000 UTC m=+1150.490757273" observedRunningTime="2025-11-26 13:42:49.063687371 +0000 UTC m=+1152.699512453" watchObservedRunningTime="2025-11-26 13:42:49.069045282 +0000 UTC m=+1152.704870364" Nov 26 13:42:49 crc kubenswrapper[4695]: I1126 13:42:49.175243 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b12044-0911-4dd6-b980-755d9c39ba91" path="/var/lib/kubelet/pods/02b12044-0911-4dd6-b980-755d9c39ba91/volumes" Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.111049 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"82b6b21a-6ed0-43d7-9763-684eca59aa29","Type":"ContainerStarted","Data":"f851bac8ea9c2c4b465e032ef63bff33948aeb8e5bcfbac09a1055d1660c289f"} Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.114050 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"814d5f70-9acd-4a44-9ba1-5f6b95933a80","Type":"ContainerStarted","Data":"45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726"} Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.114215 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.115838 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8","Type":"ContainerStarted","Data":"7caac9eac38ace84b8e69929e877e0d7fa52efcfd11b2d4c0c1db72b214450c4"} Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.117357 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d" 
event={"ID":"9f98833b-dbaf-42bc-a424-8094e025ce87","Type":"ContainerStarted","Data":"dd10bf0a7769d7b8e8461899f863c67f14920bf89a6ed31e6f66760c26f51628"} Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.117445 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-zvx8d" Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.120111 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b28b52fd-d5e1-44b4-af26-9fa98d731335","Type":"ContainerStarted","Data":"bd3d1b003c35c0be833de79161e2a5bdafbb384598d4b05edb77c336a54f31c9"} Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.122025 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rtt8r" event={"ID":"35f623f4-096c-4ac0-9b93-b489fda7cf09","Type":"ContainerStarted","Data":"be4e6f88915ec43329595e1b46488f3461fe43a4d6f6556397f2f53e6f10b96e"} Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.123702 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b02f07d7-7406-4602-b166-911408fe8bf0","Type":"ContainerStarted","Data":"dceb4a1df0f5bce84a1b96bdd4d00ab0a14ffba5581ab89095ae2252c25bc08d"} Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.164730 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zvx8d" podStartSLOduration=17.99927444 podStartE2EDuration="24.164709286s" podCreationTimestamp="2025-11-26 13:42:31 +0000 UTC" firstStartedPulling="2025-11-26 13:42:47.401887889 +0000 UTC m=+1151.037712971" lastFinishedPulling="2025-11-26 13:42:53.567322735 +0000 UTC m=+1157.203147817" observedRunningTime="2025-11-26 13:42:55.160779111 +0000 UTC m=+1158.796604203" watchObservedRunningTime="2025-11-26 13:42:55.164709286 +0000 UTC m=+1158.800534378" Nov 26 13:42:55 crc kubenswrapper[4695]: I1126 13:42:55.187955 4695 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.624371224 podStartE2EDuration="27.187935515s" podCreationTimestamp="2025-11-26 13:42:28 +0000 UTC" firstStartedPulling="2025-11-26 13:42:47.202730987 +0000 UTC m=+1150.838556059" lastFinishedPulling="2025-11-26 13:42:54.766295268 +0000 UTC m=+1158.402120350" observedRunningTime="2025-11-26 13:42:55.183256016 +0000 UTC m=+1158.819081108" watchObservedRunningTime="2025-11-26 13:42:55.187935515 +0000 UTC m=+1158.823760597" Nov 26 13:42:56 crc kubenswrapper[4695]: I1126 13:42:56.132061 4695 generic.go:334] "Generic (PLEG): container finished" podID="35f623f4-096c-4ac0-9b93-b489fda7cf09" containerID="be4e6f88915ec43329595e1b46488f3461fe43a4d6f6556397f2f53e6f10b96e" exitCode=0 Nov 26 13:42:56 crc kubenswrapper[4695]: I1126 13:42:56.132109 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rtt8r" event={"ID":"35f623f4-096c-4ac0-9b93-b489fda7cf09","Type":"ContainerDied","Data":"be4e6f88915ec43329595e1b46488f3461fe43a4d6f6556397f2f53e6f10b96e"} Nov 26 13:42:56 crc kubenswrapper[4695]: I1126 13:42:56.931885 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 26 13:42:57 crc kubenswrapper[4695]: I1126 13:42:57.139788 4695 generic.go:334] "Generic (PLEG): container finished" podID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerID="03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a" exitCode=0 Nov 26 13:42:57 crc kubenswrapper[4695]: I1126 13:42:57.139848 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" event={"ID":"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940","Type":"ContainerDied","Data":"03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a"} Nov 26 13:42:57 crc kubenswrapper[4695]: I1126 13:42:57.142420 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rtt8r" 
event={"ID":"35f623f4-096c-4ac0-9b93-b489fda7cf09","Type":"ContainerStarted","Data":"dab0a5d288da3d83e309d0343d2d2d6340a8b017d55d9d354e7ecdc4a7a4c0d7"} Nov 26 13:42:57 crc kubenswrapper[4695]: I1126 13:42:57.795037 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:42:57 crc kubenswrapper[4695]: I1126 13:42:57.850311 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zz9gr"] Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.636765 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-bp5wg"] Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.640480 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.662376 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-bp5wg"] Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.703476 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.703547 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-config\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.703623 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5l7\" (UniqueName: 
\"kubernetes.io/projected/6722cf2e-4c5c-4fd2-a307-84855131e0c2-kube-api-access-qp5l7\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.807407 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.807469 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-config\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.807530 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qp5l7\" (UniqueName: \"kubernetes.io/projected/6722cf2e-4c5c-4fd2-a307-84855131e0c2-kube-api-access-qp5l7\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.808623 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.809116 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-config\") pod 
\"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.835334 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp5l7\" (UniqueName: \"kubernetes.io/projected/6722cf2e-4c5c-4fd2-a307-84855131e0c2-kube-api-access-qp5l7\") pod \"dnsmasq-dns-7cb5889db5-bp5wg\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:58 crc kubenswrapper[4695]: I1126 13:42:58.984125 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.172625 4695 generic.go:334] "Generic (PLEG): container finished" podID="b02f07d7-7406-4602-b166-911408fe8bf0" containerID="dceb4a1df0f5bce84a1b96bdd4d00ab0a14ffba5581ab89095ae2252c25bc08d" exitCode=0 Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.177073 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.177120 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8","Type":"ContainerStarted","Data":"4a9da3e9fd611017035a5f04548b2aca57613b505289c527d4c8a010a0f26014"} Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.177144 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b28b52fd-d5e1-44b4-af26-9fa98d731335","Type":"ContainerStarted","Data":"5c1463e717288fe62cb40e09d9247aa8ef9e87b3d3f64011e2d31fa56b087faa"} Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.177158 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rtt8r" 
event={"ID":"35f623f4-096c-4ac0-9b93-b489fda7cf09","Type":"ContainerStarted","Data":"da902d9210af3faf73e73083211f6335e79ffdeea037c2ce748a3f794e9af022"} Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.177174 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b02f07d7-7406-4602-b166-911408fe8bf0","Type":"ContainerDied","Data":"dceb4a1df0f5bce84a1b96bdd4d00ab0a14ffba5581ab89095ae2252c25bc08d"} Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.184548 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" event={"ID":"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940","Type":"ContainerStarted","Data":"a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8"} Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.184613 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerName="dnsmasq-dns" containerID="cri-o://a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8" gracePeriod=10 Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.184699 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.187309 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.36694395 podStartE2EDuration="24.187255489s" podCreationTimestamp="2025-11-26 13:42:35 +0000 UTC" firstStartedPulling="2025-11-26 13:42:48.391600228 +0000 UTC m=+1152.027425310" lastFinishedPulling="2025-11-26 13:42:58.211911767 +0000 UTC m=+1161.847736849" observedRunningTime="2025-11-26 13:42:59.180050369 +0000 UTC m=+1162.815875461" watchObservedRunningTime="2025-11-26 13:42:59.187255489 +0000 UTC m=+1162.823080571" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.191870 4695 
generic.go:334] "Generic (PLEG): container finished" podID="82b6b21a-6ed0-43d7-9763-684eca59aa29" containerID="f851bac8ea9c2c4b465e032ef63bff33948aeb8e5bcfbac09a1055d1660c289f" exitCode=0 Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.191925 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"82b6b21a-6ed0-43d7-9763-684eca59aa29","Type":"ContainerDied","Data":"f851bac8ea9c2c4b465e032ef63bff33948aeb8e5bcfbac09a1055d1660c289f"} Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.231509 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=18.328731342 podStartE2EDuration="28.231490187s" podCreationTimestamp="2025-11-26 13:42:31 +0000 UTC" firstStartedPulling="2025-11-26 13:42:48.284926311 +0000 UTC m=+1151.920751393" lastFinishedPulling="2025-11-26 13:42:58.187685156 +0000 UTC m=+1161.823510238" observedRunningTime="2025-11-26 13:42:59.226401776 +0000 UTC m=+1162.862226858" watchObservedRunningTime="2025-11-26 13:42:59.231490187 +0000 UTC m=+1162.867315269" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.276265 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rtt8r" podStartSLOduration=22.209618874 podStartE2EDuration="28.276247033s" podCreationTimestamp="2025-11-26 13:42:31 +0000 UTC" firstStartedPulling="2025-11-26 13:42:47.500693246 +0000 UTC m=+1151.136518328" lastFinishedPulling="2025-11-26 13:42:53.567321415 +0000 UTC m=+1157.203146487" observedRunningTime="2025-11-26 13:42:59.248820819 +0000 UTC m=+1162.884645901" watchObservedRunningTime="2025-11-26 13:42:59.276247033 +0000 UTC m=+1162.912072115" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.289748 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" podStartSLOduration=-9223371998.565046 podStartE2EDuration="38.289729562s" 
podCreationTimestamp="2025-11-26 13:42:21 +0000 UTC" firstStartedPulling="2025-11-26 13:42:23.019629566 +0000 UTC m=+1126.655454648" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:42:59.288816542 +0000 UTC m=+1162.924641644" watchObservedRunningTime="2025-11-26 13:42:59.289729562 +0000 UTC m=+1162.925554644" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.426266 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-bp5wg"] Nov 26 13:42:59 crc kubenswrapper[4695]: W1126 13:42:59.452716 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6722cf2e_4c5c_4fd2_a307_84855131e0c2.slice/crio-979a782c0ab8eb22453c0f9165d84f012c0c5eef669594b4ed72a20928b8ba59 WatchSource:0}: Error finding container 979a782c0ab8eb22453c0f9165d84f012c0c5eef669594b4ed72a20928b8ba59: Status 404 returned error can't find the container with id 979a782c0ab8eb22453c0f9165d84f012c0c5eef669594b4ed72a20928b8ba59 Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.581012 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.719901 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-dns-svc\") pod \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.719968 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-config\") pod \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.720024 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghmk6\" (UniqueName: \"kubernetes.io/projected/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-kube-api-access-ghmk6\") pod \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\" (UID: \"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940\") " Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.726514 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-kube-api-access-ghmk6" (OuterVolumeSpecName: "kube-api-access-ghmk6") pod "e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" (UID: "e533cd9f-f8ed-4d7c-9f3b-17e2e460c940"). InnerVolumeSpecName "kube-api-access-ghmk6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.760468 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-config" (OuterVolumeSpecName: "config") pod "e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" (UID: "e533cd9f-f8ed-4d7c-9f3b-17e2e460c940"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.763849 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" (UID: "e533cd9f-f8ed-4d7c-9f3b-17e2e460c940"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.764673 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 26 13:42:59 crc kubenswrapper[4695]: E1126 13:42:59.765027 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerName="dnsmasq-dns" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.765045 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerName="dnsmasq-dns" Nov 26 13:42:59 crc kubenswrapper[4695]: E1126 13:42:59.765067 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerName="init" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.765075 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerName="init" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.765224 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerName="dnsmasq-dns" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.769987 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.772180 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-txsc2" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.773134 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.773058 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.773495 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.780628 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.822652 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghmk6\" (UniqueName: \"kubernetes.io/projected/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-kube-api-access-ghmk6\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.822804 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.822899 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.924687 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " 
pod="openstack/swift-storage-0" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.924936 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.925076 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmcm2\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-kube-api-access-dmcm2\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.925159 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b77b4e90-5d1a-4724-a57f-2ff4a394d434-cache\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:42:59 crc kubenswrapper[4695]: I1126 13:42:59.925244 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b77b4e90-5d1a-4724-a57f-2ff4a394d434-lock\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.027138 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmcm2\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-kube-api-access-dmcm2\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.027215 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b77b4e90-5d1a-4724-a57f-2ff4a394d434-cache\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.027255 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b77b4e90-5d1a-4724-a57f-2ff4a394d434-lock\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.027478 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.027539 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.028092 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.028142 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b77b4e90-5d1a-4724-a57f-2ff4a394d434-lock\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" 
Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.028507 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b77b4e90-5d1a-4724-a57f-2ff4a394d434-cache\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.028681 4695 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.028707 4695 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.028761 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift podName:b77b4e90-5d1a-4724-a57f-2ff4a394d434 nodeName:}" failed. No retries permitted until 2025-11-26 13:43:00.528742077 +0000 UTC m=+1164.164567159 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift") pod "swift-storage-0" (UID: "b77b4e90-5d1a-4724-a57f-2ff4a394d434") : configmap "swift-ring-files" not found Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.046390 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmcm2\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-kube-api-access-dmcm2\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.052120 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.200112 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.204525 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"82b6b21a-6ed0-43d7-9763-684eca59aa29","Type":"ContainerStarted","Data":"e9aee3ce473a472e65198073c9aee8bf1909398dd4b1993baa92e9f97a7ce814"} Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.206205 4695 generic.go:334] "Generic (PLEG): container finished" podID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerID="75886e431c7eba29b4a692212e3cd0ad82fd0525487049a9b5c4ad186a49655f" exitCode=0 Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.206302 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" event={"ID":"6722cf2e-4c5c-4fd2-a307-84855131e0c2","Type":"ContainerDied","Data":"75886e431c7eba29b4a692212e3cd0ad82fd0525487049a9b5c4ad186a49655f"} Nov 
26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.206565 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" event={"ID":"6722cf2e-4c5c-4fd2-a307-84855131e0c2","Type":"ContainerStarted","Data":"979a782c0ab8eb22453c0f9165d84f012c0c5eef669594b4ed72a20928b8ba59"} Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.217260 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b02f07d7-7406-4602-b166-911408fe8bf0","Type":"ContainerStarted","Data":"bba718f84ce4a66b5d05bb6c25a5955bc7fb02265f887d6bbb8ff8ec532e20f2"} Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.236839 4695 generic.go:334] "Generic (PLEG): container finished" podID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" containerID="a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8" exitCode=0 Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.237032 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" event={"ID":"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940","Type":"ContainerDied","Data":"a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8"} Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.237076 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.237091 4695 scope.go:117] "RemoveContainer" containerID="a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.237076 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-zz9gr" event={"ID":"e533cd9f-f8ed-4d7c-9f3b-17e2e460c940","Type":"ContainerDied","Data":"10bd9d1fd16c3cb53480fa3ab48a184795d0891a831740a794f0749892577dbd"} Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.239362 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.433071969 podStartE2EDuration="37.239331163s" podCreationTimestamp="2025-11-26 13:42:23 +0000 UTC" firstStartedPulling="2025-11-26 13:42:46.675977482 +0000 UTC m=+1150.311802564" lastFinishedPulling="2025-11-26 13:42:53.482236676 +0000 UTC m=+1157.118061758" observedRunningTime="2025-11-26 13:43:00.22228048 +0000 UTC m=+1163.858105572" watchObservedRunningTime="2025-11-26 13:43:00.239331163 +0000 UTC m=+1163.875156245" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.241722 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.263150 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.287453 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.736585796 podStartE2EDuration="35.287323432s" podCreationTimestamp="2025-11-26 13:42:25 +0000 UTC" firstStartedPulling="2025-11-26 13:42:46.675947901 +0000 UTC m=+1150.311772983" lastFinishedPulling="2025-11-26 13:42:53.226685537 +0000 UTC m=+1156.862510619" 
observedRunningTime="2025-11-26 13:43:00.27565698 +0000 UTC m=+1163.911482062" watchObservedRunningTime="2025-11-26 13:43:00.287323432 +0000 UTC m=+1163.923148524" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.307048 4695 scope.go:117] "RemoveContainer" containerID="03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.346277 4695 scope.go:117] "RemoveContainer" containerID="a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8" Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.346599 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8\": container with ID starting with a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8 not found: ID does not exist" containerID="a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.346629 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8"} err="failed to get container status \"a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8\": rpc error: code = NotFound desc = could not find container \"a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8\": container with ID starting with a37b96aa275c8fc3dffe4c01596d930f1ef21a05803f7fba565becef691941c8 not found: ID does not exist" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.346655 4695 scope.go:117] "RemoveContainer" containerID="03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a" Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.347027 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a\": container with ID starting with 03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a not found: ID does not exist" containerID="03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.347047 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a"} err="failed to get container status \"03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a\": rpc error: code = NotFound desc = could not find container \"03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a\": container with ID starting with 03aae7cc155eda7a7b42a019c1fcee946d45191e836da6d3fc5d48da6788bf3a not found: ID does not exist" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.350378 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zz9gr"] Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.358704 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-zz9gr"] Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.462377 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.498402 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 26 13:43:00 crc kubenswrapper[4695]: I1126 13:43:00.546558 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.547022 4695 projected.go:288] Couldn't get 
configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.547095 4695 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:43:00 crc kubenswrapper[4695]: E1126 13:43:00.547206 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift podName:b77b4e90-5d1a-4724-a57f-2ff4a394d434 nodeName:}" failed. No retries permitted until 2025-11-26 13:43:01.547188257 +0000 UTC m=+1165.183013329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift") pod "swift-storage-0" (UID: "b77b4e90-5d1a-4724-a57f-2ff4a394d434") : configmap "swift-ring-files" not found Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.178625 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e533cd9f-f8ed-4d7c-9f3b-17e2e460c940" path="/var/lib/kubelet/pods/e533cd9f-f8ed-4d7c-9f3b-17e2e460c940/volumes" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.250945 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" event={"ID":"6722cf2e-4c5c-4fd2-a307-84855131e0c2","Type":"ContainerStarted","Data":"1cd34b58ae75e786f1c7bb257505a963a45cc90087110ceb8b25c7289f8c253a"} Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.251023 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.252856 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.253126 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 26 
13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.272105 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" podStartSLOduration=3.272082302 podStartE2EDuration="3.272082302s" podCreationTimestamp="2025-11-26 13:42:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:01.268750656 +0000 UTC m=+1164.904575738" watchObservedRunningTime="2025-11-26 13:43:01.272082302 +0000 UTC m=+1164.907907384" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.294027 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.295465 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.549481 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-bp5wg"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.564909 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:01 crc kubenswrapper[4695]: E1126 13:43:01.565065 4695 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:43:01 crc kubenswrapper[4695]: E1126 13:43:01.565081 4695 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:43:01 crc kubenswrapper[4695]: E1126 13:43:01.565129 4695 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift podName:b77b4e90-5d1a-4724-a57f-2ff4a394d434 nodeName:}" failed. No retries permitted until 2025-11-26 13:43:03.565112074 +0000 UTC m=+1167.200937156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift") pod "swift-storage-0" (UID: "b77b4e90-5d1a-4724-a57f-2ff4a394d434") : configmap "swift-ring-files" not found Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.583191 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-88qhm"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.584877 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.589497 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.615385 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-88qhm"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.626654 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5cm6x"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.627674 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.630191 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.644904 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5cm6x"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.667564 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.667724 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.667798 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-config\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.667842 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4b2s\" (UniqueName: \"kubernetes.io/projected/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-kube-api-access-t4b2s\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " 
pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.716309 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-88qhm"] Nov 26 13:43:01 crc kubenswrapper[4695]: E1126 13:43:01.729580 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[config dns-svc kube-api-access-t4b2s ovsdbserver-sb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" podUID="87df5b74-ef1e-4fde-a3f9-4ec8304020b4" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.746492 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-nqfxz"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.749400 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.752484 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.770701 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-nqfxz"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771543 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22e7a6af-7195-45fd-979b-4af39f3cfb62-ovs-rundir\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771609 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e7a6af-7195-45fd-979b-4af39f3cfb62-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5cm6x\" (UID: 
\"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771639 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771675 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fhqh\" (UniqueName: \"kubernetes.io/projected/22e7a6af-7195-45fd-979b-4af39f3cfb62-kube-api-access-7fhqh\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771710 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22e7a6af-7195-45fd-979b-4af39f3cfb62-ovn-rundir\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771732 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771757 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-config\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " 
pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771780 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4b2s\" (UniqueName: \"kubernetes.io/projected/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-kube-api-access-t4b2s\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771821 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e7a6af-7195-45fd-979b-4af39f3cfb62-config\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.771849 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e7a6af-7195-45fd-979b-4af39f3cfb62-combined-ca-bundle\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.772743 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-dns-svc\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.775039 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-config\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc 
kubenswrapper[4695]: I1126 13:43:01.777593 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-ovsdbserver-sb\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.781783 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.783544 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.786545 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.787070 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.787297 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.789597 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-mhdsc" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.800314 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4b2s\" (UniqueName: \"kubernetes.io/projected/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-kube-api-access-t4b2s\") pod \"dnsmasq-dns-8cc7fc4dc-88qhm\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.809534 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.872831 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22e7a6af-7195-45fd-979b-4af39f3cfb62-config\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.872901 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.872926 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.872943 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e7a6af-7195-45fd-979b-4af39f3cfb62-combined-ca-bundle\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873472 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873499 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/22e7a6af-7195-45fd-979b-4af39f3cfb62-config\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873507 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjn4\" (UniqueName: \"kubernetes.io/projected/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-kube-api-access-4kjn4\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873554 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2pb4\" (UniqueName: \"kubernetes.io/projected/26637d33-5a10-4201-b728-2a250279651b-kube-api-access-j2pb4\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873584 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22e7a6af-7195-45fd-979b-4af39f3cfb62-ovs-rundir\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873600 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26637d33-5a10-4201-b728-2a250279651b-scripts\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873613 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/26637d33-5a10-4201-b728-2a250279651b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873638 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873690 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26637d33-5a10-4201-b728-2a250279651b-config\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873707 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e7a6af-7195-45fd-979b-4af39f3cfb62-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873726 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-config\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873752 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873760 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/22e7a6af-7195-45fd-979b-4af39f3cfb62-ovs-rundir\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873772 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fhqh\" (UniqueName: \"kubernetes.io/projected/22e7a6af-7195-45fd-979b-4af39f3cfb62-kube-api-access-7fhqh\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873805 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873830 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22e7a6af-7195-45fd-979b-4af39f3cfb62-ovn-rundir\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.873929 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/22e7a6af-7195-45fd-979b-4af39f3cfb62-ovn-rundir\") pod 
\"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.876302 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22e7a6af-7195-45fd-979b-4af39f3cfb62-combined-ca-bundle\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.877025 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22e7a6af-7195-45fd-979b-4af39f3cfb62-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.889114 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fhqh\" (UniqueName: \"kubernetes.io/projected/22e7a6af-7195-45fd-979b-4af39f3cfb62-kube-api-access-7fhqh\") pod \"ovn-controller-metrics-5cm6x\" (UID: \"22e7a6af-7195-45fd-979b-4af39f3cfb62\") " pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.952131 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-5cm6x" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.977956 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978236 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978274 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978311 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kjn4\" (UniqueName: \"kubernetes.io/projected/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-kube-api-access-4kjn4\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978342 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2pb4\" (UniqueName: \"kubernetes.io/projected/26637d33-5a10-4201-b728-2a250279651b-kube-api-access-j2pb4\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc 
kubenswrapper[4695]: I1126 13:43:01.978380 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26637d33-5a10-4201-b728-2a250279651b-scripts\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978398 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26637d33-5a10-4201-b728-2a250279651b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978429 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978467 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26637d33-5a10-4201-b728-2a250279651b-config\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978491 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-config\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978524 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.978561 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.979068 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.979515 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.979859 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/26637d33-5a10-4201-b728-2a250279651b-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.979956 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/26637d33-5a10-4201-b728-2a250279651b-scripts\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " 
pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.980650 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.981761 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-config\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.981912 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26637d33-5a10-4201-b728-2a250279651b-config\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.984371 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.985511 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.986172 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26637d33-5a10-4201-b728-2a250279651b-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:01 crc kubenswrapper[4695]: I1126 13:43:01.999035 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kjn4\" (UniqueName: \"kubernetes.io/projected/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-kube-api-access-4kjn4\") pod \"dnsmasq-dns-b8fbc5445-nqfxz\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.000639 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2pb4\" (UniqueName: \"kubernetes.io/projected/26637d33-5a10-4201-b728-2a250279651b-kube-api-access-j2pb4\") pod \"ovn-northd-0\" (UID: \"26637d33-5a10-4201-b728-2a250279651b\") " pod="openstack/ovn-northd-0" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.077835 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.152963 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.264816 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.284527 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.385940 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-ovsdbserver-sb\") pod \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.386100 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4b2s\" (UniqueName: \"kubernetes.io/projected/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-kube-api-access-t4b2s\") pod \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.386140 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-config\") pod \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.386179 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-dns-svc\") pod \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\" (UID: \"87df5b74-ef1e-4fde-a3f9-4ec8304020b4\") " Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.387281 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "87df5b74-ef1e-4fde-a3f9-4ec8304020b4" (UID: "87df5b74-ef1e-4fde-a3f9-4ec8304020b4"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.388370 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-config" (OuterVolumeSpecName: "config") pod "87df5b74-ef1e-4fde-a3f9-4ec8304020b4" (UID: "87df5b74-ef1e-4fde-a3f9-4ec8304020b4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.388662 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "87df5b74-ef1e-4fde-a3f9-4ec8304020b4" (UID: "87df5b74-ef1e-4fde-a3f9-4ec8304020b4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.392813 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-kube-api-access-t4b2s" (OuterVolumeSpecName: "kube-api-access-t4b2s") pod "87df5b74-ef1e-4fde-a3f9-4ec8304020b4" (UID: "87df5b74-ef1e-4fde-a3f9-4ec8304020b4"). InnerVolumeSpecName "kube-api-access-t4b2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.411828 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5cm6x"] Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.488167 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4b2s\" (UniqueName: \"kubernetes.io/projected/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-kube-api-access-t4b2s\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.488208 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.488219 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.488231 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/87df5b74-ef1e-4fde-a3f9-4ec8304020b4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.532233 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-nqfxz"] Nov 26 13:43:02 crc kubenswrapper[4695]: I1126 13:43:02.658443 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 26 13:43:02 crc kubenswrapper[4695]: W1126 13:43:02.666941 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26637d33_5a10_4201_b728_2a250279651b.slice/crio-0a92007867ee3bd3b734fb5c3da94412975b94fae3a3e3a9b79f212de52df4aa WatchSource:0}: Error finding container 
0a92007867ee3bd3b734fb5c3da94412975b94fae3a3e3a9b79f212de52df4aa: Status 404 returned error can't find the container with id 0a92007867ee3bd3b734fb5c3da94412975b94fae3a3e3a9b79f212de52df4aa Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.272597 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26637d33-5a10-4201-b728-2a250279651b","Type":"ContainerStarted","Data":"0a92007867ee3bd3b734fb5c3da94412975b94fae3a3e3a9b79f212de52df4aa"} Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.274281 4695 generic.go:334] "Generic (PLEG): container finished" podID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerID="ba9f980452abca66cb826d0c4f166d7eacd3f607723da26468f27cf111921afe" exitCode=0 Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.274367 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" event={"ID":"6fb1a7cc-9253-4702-b8ff-9b2daa077c96","Type":"ContainerDied","Data":"ba9f980452abca66cb826d0c4f166d7eacd3f607723da26468f27cf111921afe"} Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.275464 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" event={"ID":"6fb1a7cc-9253-4702-b8ff-9b2daa077c96","Type":"ContainerStarted","Data":"b039d5fd7b0a39ab2c22e8bfc77786f8264f012154a51b0e49e63751d38bcd35"} Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.275759 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5cm6x" event={"ID":"22e7a6af-7195-45fd-979b-4af39f3cfb62","Type":"ContainerStarted","Data":"8e2b4e5aa9fbb5e5cd682567b4cfee97f0988bf81fe272fa378e71f3727c1dec"} Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.275795 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5cm6x" event={"ID":"22e7a6af-7195-45fd-979b-4af39f3cfb62","Type":"ContainerStarted","Data":"f40f62346012c2d7e3b494aa4dffa35eaed0053eb86cb7c6ea761adf3a2133d5"} 
Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.275860 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8cc7fc4dc-88qhm" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.276134 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" podUID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerName="dnsmasq-dns" containerID="cri-o://1cd34b58ae75e786f1c7bb257505a963a45cc90087110ceb8b25c7289f8c253a" gracePeriod=10 Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.327597 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5cm6x" podStartSLOduration=2.327576462 podStartE2EDuration="2.327576462s" podCreationTimestamp="2025-11-26 13:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:03.319183865 +0000 UTC m=+1166.955008957" watchObservedRunningTime="2025-11-26 13:43:03.327576462 +0000 UTC m=+1166.963401544" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.387925 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-88qhm"] Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.398196 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8cc7fc4dc-88qhm"] Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.608444 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:03 crc kubenswrapper[4695]: E1126 13:43:03.609096 4695 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:43:03 crc 
kubenswrapper[4695]: E1126 13:43:03.609117 4695 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:43:03 crc kubenswrapper[4695]: E1126 13:43:03.609172 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift podName:b77b4e90-5d1a-4724-a57f-2ff4a394d434 nodeName:}" failed. No retries permitted until 2025-11-26 13:43:07.609151339 +0000 UTC m=+1171.244976431 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift") pod "swift-storage-0" (UID: "b77b4e90-5d1a-4724-a57f-2ff4a394d434") : configmap "swift-ring-files" not found Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.747913 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-n2464"] Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.752828 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.761261 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.761319 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.761328 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n2464"] Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.764422 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.811589 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-dispersionconf\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.811888 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-ring-data-devices\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.811980 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-etc-swift\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 
13:43:03.812038 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-scripts\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.812181 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67f75\" (UniqueName: \"kubernetes.io/projected/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-kube-api-access-67f75\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.812602 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-swiftconf\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.812719 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-combined-ca-bundle\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.914687 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67f75\" (UniqueName: \"kubernetes.io/projected/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-kube-api-access-67f75\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc 
kubenswrapper[4695]: I1126 13:43:03.914757 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-swiftconf\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.914796 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-combined-ca-bundle\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.914854 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-dispersionconf\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.914878 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-ring-data-devices\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.914921 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-etc-swift\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.914949 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-scripts\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.915765 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-scripts\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.915840 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-etc-swift\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.916130 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-ring-data-devices\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.923944 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-swiftconf\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.932384 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-dispersionconf\") 
pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.935160 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-combined-ca-bundle\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:03 crc kubenswrapper[4695]: I1126 13:43:03.936562 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67f75\" (UniqueName: \"kubernetes.io/projected/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-kube-api-access-67f75\") pod \"swift-ring-rebalance-n2464\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.097989 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.284891 4695 generic.go:334] "Generic (PLEG): container finished" podID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerID="1cd34b58ae75e786f1c7bb257505a963a45cc90087110ceb8b25c7289f8c253a" exitCode=0 Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.284976 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" event={"ID":"6722cf2e-4c5c-4fd2-a307-84855131e0c2","Type":"ContainerDied","Data":"1cd34b58ae75e786f1c7bb257505a963a45cc90087110ceb8b25c7289f8c253a"} Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.288378 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" event={"ID":"6fb1a7cc-9253-4702-b8ff-9b2daa077c96","Type":"ContainerStarted","Data":"1ec3a3ed277498c6dad65878a193b671b5cbb0133ab9f6a452bf705ce487f768"} Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.311716 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" podStartSLOduration=3.311693972 podStartE2EDuration="3.311693972s" podCreationTimestamp="2025-11-26 13:43:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:04.305380112 +0000 UTC m=+1167.941205214" watchObservedRunningTime="2025-11-26 13:43:04.311693972 +0000 UTC m=+1167.947519064" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.556965 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.642196 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qp5l7\" (UniqueName: \"kubernetes.io/projected/6722cf2e-4c5c-4fd2-a307-84855131e0c2-kube-api-access-qp5l7\") pod \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.642297 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-dns-svc\") pod \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.642392 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-config\") pod \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\" (UID: \"6722cf2e-4c5c-4fd2-a307-84855131e0c2\") " Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.666187 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6722cf2e-4c5c-4fd2-a307-84855131e0c2-kube-api-access-qp5l7" (OuterVolumeSpecName: "kube-api-access-qp5l7") pod "6722cf2e-4c5c-4fd2-a307-84855131e0c2" (UID: "6722cf2e-4c5c-4fd2-a307-84855131e0c2"). InnerVolumeSpecName "kube-api-access-qp5l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.711686 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6722cf2e-4c5c-4fd2-a307-84855131e0c2" (UID: "6722cf2e-4c5c-4fd2-a307-84855131e0c2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.718450 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-config" (OuterVolumeSpecName: "config") pod "6722cf2e-4c5c-4fd2-a307-84855131e0c2" (UID: "6722cf2e-4c5c-4fd2-a307-84855131e0c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.744629 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.744662 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6722cf2e-4c5c-4fd2-a307-84855131e0c2-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.744676 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qp5l7\" (UniqueName: \"kubernetes.io/projected/6722cf2e-4c5c-4fd2-a307-84855131e0c2-kube-api-access-qp5l7\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:04 crc kubenswrapper[4695]: I1126 13:43:04.975125 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-n2464"] Nov 26 13:43:04 crc kubenswrapper[4695]: W1126 13:43:04.982945 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc139d50_e9df_4c3c_9ae0_cf5a4c7f0205.slice/crio-b11f0e4166c4eb2cdf5bb59b17a200c40bf369ec94a31a87dd9460bd4460ea85 WatchSource:0}: Error finding container b11f0e4166c4eb2cdf5bb59b17a200c40bf369ec94a31a87dd9460bd4460ea85: Status 404 returned error can't find the container with id b11f0e4166c4eb2cdf5bb59b17a200c40bf369ec94a31a87dd9460bd4460ea85 Nov 26 13:43:05 crc 
kubenswrapper[4695]: I1126 13:43:05.172921 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87df5b74-ef1e-4fde-a3f9-4ec8304020b4" path="/var/lib/kubelet/pods/87df5b74-ef1e-4fde-a3f9-4ec8304020b4/volumes" Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.253697 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.255293 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.298174 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n2464" event={"ID":"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205","Type":"ContainerStarted","Data":"b11f0e4166c4eb2cdf5bb59b17a200c40bf369ec94a31a87dd9460bd4460ea85"} Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.301944 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" event={"ID":"6722cf2e-4c5c-4fd2-a307-84855131e0c2","Type":"ContainerDied","Data":"979a782c0ab8eb22453c0f9165d84f012c0c5eef669594b4ed72a20928b8ba59"} Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.301998 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-bp5wg" Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.302042 4695 scope.go:117] "RemoveContainer" containerID="1cd34b58ae75e786f1c7bb257505a963a45cc90087110ceb8b25c7289f8c253a" Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.302320 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.329012 4695 scope.go:117] "RemoveContainer" containerID="75886e431c7eba29b4a692212e3cd0ad82fd0525487049a9b5c4ad186a49655f" Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.331249 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-bp5wg"] Nov 26 13:43:05 crc kubenswrapper[4695]: I1126 13:43:05.339308 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-bp5wg"] Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.397283 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.398561 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.398667 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.399402 4695 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d5ada8ee218c5b0e5eb69bc5a99a08479c0f37839560e96a3518e7444c465fbd"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.399531 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://d5ada8ee218c5b0e5eb69bc5a99a08479c0f37839560e96a3518e7444c465fbd" gracePeriod=600 Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.625088 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.625180 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 26 13:43:06 crc kubenswrapper[4695]: I1126 13:43:06.715724 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 26 13:43:07 crc kubenswrapper[4695]: I1126 13:43:07.179961 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" path="/var/lib/kubelet/pods/6722cf2e-4c5c-4fd2-a307-84855131e0c2/volumes" Nov 26 13:43:07 crc kubenswrapper[4695]: I1126 13:43:07.402801 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 26 13:43:07 crc kubenswrapper[4695]: I1126 13:43:07.696762 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " 
pod="openstack/swift-storage-0" Nov 26 13:43:07 crc kubenswrapper[4695]: E1126 13:43:07.696985 4695 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:43:07 crc kubenswrapper[4695]: E1126 13:43:07.697013 4695 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:43:07 crc kubenswrapper[4695]: E1126 13:43:07.697082 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift podName:b77b4e90-5d1a-4724-a57f-2ff4a394d434 nodeName:}" failed. No retries permitted until 2025-11-26 13:43:15.697062224 +0000 UTC m=+1179.332887306 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift") pod "swift-storage-0" (UID: "b77b4e90-5d1a-4724-a57f-2ff4a394d434") : configmap "swift-ring-files" not found Nov 26 13:43:08 crc kubenswrapper[4695]: I1126 13:43:08.330474 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26637d33-5a10-4201-b728-2a250279651b","Type":"ContainerStarted","Data":"3ac043bd4a0d3f75b35801949ab3c1c767d555463dbd631539182a6ad4ecd34a"} Nov 26 13:43:08 crc kubenswrapper[4695]: I1126 13:43:08.334943 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="d5ada8ee218c5b0e5eb69bc5a99a08479c0f37839560e96a3518e7444c465fbd" exitCode=0 Nov 26 13:43:08 crc kubenswrapper[4695]: I1126 13:43:08.335023 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"d5ada8ee218c5b0e5eb69bc5a99a08479c0f37839560e96a3518e7444c465fbd"} Nov 26 13:43:08 crc kubenswrapper[4695]: I1126 13:43:08.335070 
4695 scope.go:117] "RemoveContainer" containerID="5db1765d388a4f8bc2fce4e005f968080dbee8df86cf9973d4a9582128ebd4df" Nov 26 13:43:08 crc kubenswrapper[4695]: I1126 13:43:08.587637 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 26 13:43:09 crc kubenswrapper[4695]: I1126 13:43:09.358585 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"26637d33-5a10-4201-b728-2a250279651b","Type":"ContainerStarted","Data":"0635a6ae2ec73a0e84b273aa80d0c9a9ff69962033917f036cfd23652ef569e4"} Nov 26 13:43:10 crc kubenswrapper[4695]: I1126 13:43:10.377770 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"d704a070a53ff2c48f0d0e2fcd3340ab686270f91e63ed80bdf2afa4d7bc31e8"} Nov 26 13:43:10 crc kubenswrapper[4695]: I1126 13:43:10.378241 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 26 13:43:10 crc kubenswrapper[4695]: I1126 13:43:10.423462 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=6.060201662 podStartE2EDuration="9.423441788s" podCreationTimestamp="2025-11-26 13:43:01 +0000 UTC" firstStartedPulling="2025-11-26 13:43:02.669897848 +0000 UTC m=+1166.305722930" lastFinishedPulling="2025-11-26 13:43:06.033137964 +0000 UTC m=+1169.668963056" observedRunningTime="2025-11-26 13:43:10.401892002 +0000 UTC m=+1174.037717104" watchObservedRunningTime="2025-11-26 13:43:10.423441788 +0000 UTC m=+1174.059266870" Nov 26 13:43:12 crc kubenswrapper[4695]: I1126 13:43:12.081741 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:43:12 crc kubenswrapper[4695]: I1126 13:43:12.151373 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-mlxwb"] Nov 26 13:43:12 crc kubenswrapper[4695]: I1126 13:43:12.151638 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerName="dnsmasq-dns" containerID="cri-o://6a04c893ab0c01a14032c17f29695bc84b7d2b76406c4ec9dfe4e4c694795ba0" gracePeriod=10 Nov 26 13:43:12 crc kubenswrapper[4695]: I1126 13:43:12.404656 4695 generic.go:334] "Generic (PLEG): container finished" podID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerID="6a04c893ab0c01a14032c17f29695bc84b7d2b76406c4ec9dfe4e4c694795ba0" exitCode=0 Nov 26 13:43:12 crc kubenswrapper[4695]: I1126 13:43:12.404898 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" event={"ID":"4ce3ac6a-a69e-4066-a452-043e99787c41","Type":"ContainerDied","Data":"6a04c893ab0c01a14032c17f29695bc84b7d2b76406c4ec9dfe4e4c694795ba0"} Nov 26 13:43:12 crc kubenswrapper[4695]: I1126 13:43:12.793543 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.97:5353: connect: connection refused" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.392747 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.463875 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.619371 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.712737 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-dns-svc\") pod \"4ce3ac6a-a69e-4066-a452-043e99787c41\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.712823 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qf8c\" (UniqueName: \"kubernetes.io/projected/4ce3ac6a-a69e-4066-a452-043e99787c41-kube-api-access-8qf8c\") pod \"4ce3ac6a-a69e-4066-a452-043e99787c41\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.712855 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-config\") pod \"4ce3ac6a-a69e-4066-a452-043e99787c41\" (UID: \"4ce3ac6a-a69e-4066-a452-043e99787c41\") " Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.720372 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ce3ac6a-a69e-4066-a452-043e99787c41-kube-api-access-8qf8c" (OuterVolumeSpecName: "kube-api-access-8qf8c") pod "4ce3ac6a-a69e-4066-a452-043e99787c41" (UID: "4ce3ac6a-a69e-4066-a452-043e99787c41"). InnerVolumeSpecName "kube-api-access-8qf8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.757294 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4ce3ac6a-a69e-4066-a452-043e99787c41" (UID: "4ce3ac6a-a69e-4066-a452-043e99787c41"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.757791 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-config" (OuterVolumeSpecName: "config") pod "4ce3ac6a-a69e-4066-a452-043e99787c41" (UID: "4ce3ac6a-a69e-4066-a452-043e99787c41"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.815157 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qf8c\" (UniqueName: \"kubernetes.io/projected/4ce3ac6a-a69e-4066-a452-043e99787c41-kube-api-access-8qf8c\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.815188 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:13 crc kubenswrapper[4695]: I1126 13:43:13.815197 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4ce3ac6a-a69e-4066-a452-043e99787c41-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.428637 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n2464" event={"ID":"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205","Type":"ContainerStarted","Data":"3c5a372ca5c4a908b8f2cf1254b9c671b7c0fc907e205262a541b5292c992d36"} Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.431570 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" event={"ID":"4ce3ac6a-a69e-4066-a452-043e99787c41","Type":"ContainerDied","Data":"0097f4012592a910fb2d1a90c463ec6e9bb19a44b591057fa7515e490a9eb1d0"} Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.431643 4695 scope.go:117] "RemoveContainer" 
containerID="6a04c893ab0c01a14032c17f29695bc84b7d2b76406c4ec9dfe4e4c694795ba0" Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.431664 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-mlxwb" Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.453370 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-n2464" podStartSLOduration=2.949180021 podStartE2EDuration="11.453322185s" podCreationTimestamp="2025-11-26 13:43:03 +0000 UTC" firstStartedPulling="2025-11-26 13:43:04.985026996 +0000 UTC m=+1168.620852098" lastFinishedPulling="2025-11-26 13:43:13.48916918 +0000 UTC m=+1177.124994262" observedRunningTime="2025-11-26 13:43:14.447816109 +0000 UTC m=+1178.083641221" watchObservedRunningTime="2025-11-26 13:43:14.453322185 +0000 UTC m=+1178.089147307" Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.461629 4695 scope.go:117] "RemoveContainer" containerID="8d24d52db01665fd6c5a9faf9faa8e5d48c20218905234412d73a0d33e38e258" Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.489866 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mlxwb"] Nov 26 13:43:14 crc kubenswrapper[4695]: I1126 13:43:14.499087 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-mlxwb"] Nov 26 13:43:15 crc kubenswrapper[4695]: I1126 13:43:15.174364 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" path="/var/lib/kubelet/pods/4ce3ac6a-a69e-4066-a452-043e99787c41/volumes" Nov 26 13:43:15 crc kubenswrapper[4695]: I1126 13:43:15.743043 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " 
pod="openstack/swift-storage-0" Nov 26 13:43:15 crc kubenswrapper[4695]: E1126 13:43:15.743285 4695 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 26 13:43:15 crc kubenswrapper[4695]: E1126 13:43:15.743322 4695 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 26 13:43:15 crc kubenswrapper[4695]: E1126 13:43:15.743408 4695 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift podName:b77b4e90-5d1a-4724-a57f-2ff4a394d434 nodeName:}" failed. No retries permitted until 2025-11-26 13:43:31.743385538 +0000 UTC m=+1195.379210620 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift") pod "swift-storage-0" (UID: "b77b4e90-5d1a-4724-a57f-2ff4a394d434") : configmap "swift-ring-files" not found Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.460776 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b669-account-create-update-sksw2"] Nov 26 13:43:16 crc kubenswrapper[4695]: E1126 13:43:16.461173 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerName="dnsmasq-dns" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.461189 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerName="dnsmasq-dns" Nov 26 13:43:16 crc kubenswrapper[4695]: E1126 13:43:16.461202 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerName="init" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.461212 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerName="init" Nov 26 13:43:16 
crc kubenswrapper[4695]: E1126 13:43:16.461247 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerName="dnsmasq-dns" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.461256 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerName="dnsmasq-dns" Nov 26 13:43:16 crc kubenswrapper[4695]: E1126 13:43:16.461283 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerName="init" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.461290 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerName="init" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.461541 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ce3ac6a-a69e-4066-a452-043e99787c41" containerName="dnsmasq-dns" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.461567 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6722cf2e-4c5c-4fd2-a307-84855131e0c2" containerName="dnsmasq-dns" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.462242 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.465412 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.471534 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b669-account-create-update-sksw2"] Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.528912 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-nbvwd"] Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.530150 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.536759 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nbvwd"] Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.557280 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8442l\" (UniqueName: \"kubernetes.io/projected/889f9407-9537-4078-91f9-01e10810dd66-kube-api-access-8442l\") pod \"keystone-b669-account-create-update-sksw2\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.557379 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889f9407-9537-4078-91f9-01e10810dd66-operator-scripts\") pod \"keystone-b669-account-create-update-sksw2\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.557513 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tknxx\" (UniqueName: \"kubernetes.io/projected/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-kube-api-access-tknxx\") pod \"keystone-db-create-nbvwd\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.557820 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-operator-scripts\") pod \"keystone-db-create-nbvwd\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.659395 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-operator-scripts\") pod \"keystone-db-create-nbvwd\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.659498 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8442l\" (UniqueName: \"kubernetes.io/projected/889f9407-9537-4078-91f9-01e10810dd66-kube-api-access-8442l\") pod \"keystone-b669-account-create-update-sksw2\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.659567 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889f9407-9537-4078-91f9-01e10810dd66-operator-scripts\") pod \"keystone-b669-account-create-update-sksw2\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.659607 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tknxx\" (UniqueName: \"kubernetes.io/projected/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-kube-api-access-tknxx\") pod \"keystone-db-create-nbvwd\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.660283 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-operator-scripts\") pod \"keystone-db-create-nbvwd\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.660826 
4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889f9407-9537-4078-91f9-01e10810dd66-operator-scripts\") pod \"keystone-b669-account-create-update-sksw2\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.685028 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8442l\" (UniqueName: \"kubernetes.io/projected/889f9407-9537-4078-91f9-01e10810dd66-kube-api-access-8442l\") pod \"keystone-b669-account-create-update-sksw2\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.685283 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tknxx\" (UniqueName: \"kubernetes.io/projected/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-kube-api-access-tknxx\") pod \"keystone-db-create-nbvwd\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.726134 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dm6ln"] Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.727631 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.743312 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dm6ln"] Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.762454 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3c192f-2601-41cc-a59f-716af5cffc4c-operator-scripts\") pod \"placement-db-create-dm6ln\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.762544 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nl5p\" (UniqueName: \"kubernetes.io/projected/ce3c192f-2601-41cc-a59f-716af5cffc4c-kube-api-access-8nl5p\") pod \"placement-db-create-dm6ln\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.810947 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.839886 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3897-account-create-update-dvm5t"] Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.841239 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.845391 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.847291 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.854006 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3897-account-create-update-dvm5t"] Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.863781 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-operator-scripts\") pod \"placement-3897-account-create-update-dvm5t\" (UID: \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.863868 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3c192f-2601-41cc-a59f-716af5cffc4c-operator-scripts\") pod \"placement-db-create-dm6ln\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.863925 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nl5p\" (UniqueName: \"kubernetes.io/projected/ce3c192f-2601-41cc-a59f-716af5cffc4c-kube-api-access-8nl5p\") pod \"placement-db-create-dm6ln\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.863976 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79n52\" (UniqueName: \"kubernetes.io/projected/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-kube-api-access-79n52\") pod \"placement-3897-account-create-update-dvm5t\" (UID: 
\"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.864872 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3c192f-2601-41cc-a59f-716af5cffc4c-operator-scripts\") pod \"placement-db-create-dm6ln\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.896855 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nl5p\" (UniqueName: \"kubernetes.io/projected/ce3c192f-2601-41cc-a59f-716af5cffc4c-kube-api-access-8nl5p\") pod \"placement-db-create-dm6ln\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.965846 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79n52\" (UniqueName: \"kubernetes.io/projected/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-kube-api-access-79n52\") pod \"placement-3897-account-create-update-dvm5t\" (UID: \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.965930 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-operator-scripts\") pod \"placement-3897-account-create-update-dvm5t\" (UID: \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.966560 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-operator-scripts\") pod 
\"placement-3897-account-create-update-dvm5t\" (UID: \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:16 crc kubenswrapper[4695]: I1126 13:43:16.994108 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79n52\" (UniqueName: \"kubernetes.io/projected/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-kube-api-access-79n52\") pod \"placement-3897-account-create-update-dvm5t\" (UID: \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.076532 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-89jz2"] Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.079108 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.086878 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.090940 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-89jz2"] Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.175329 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-operator-scripts\") pod \"glance-db-create-89jz2\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.175402 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvsqr\" (UniqueName: \"kubernetes.io/projected/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-kube-api-access-gvsqr\") pod \"glance-db-create-89jz2\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.222544 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b232-account-create-update-rlm2c"] Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.223663 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b232-account-create-update-rlm2c"] Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.223743 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.229810 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.252203 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.265457 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.288376 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-operator-scripts\") pod \"glance-db-create-89jz2\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.289528 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr445\" (UniqueName: \"kubernetes.io/projected/ba02b5bd-787d-4939-bfd7-5875a25173dc-kube-api-access-vr445\") pod \"glance-b232-account-create-update-rlm2c\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.289649 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba02b5bd-787d-4939-bfd7-5875a25173dc-operator-scripts\") pod \"glance-b232-account-create-update-rlm2c\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.289743 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvsqr\" (UniqueName: \"kubernetes.io/projected/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-kube-api-access-gvsqr\") pod \"glance-db-create-89jz2\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.290371 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-operator-scripts\") pod \"glance-db-create-89jz2\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.317109 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvsqr\" (UniqueName: \"kubernetes.io/projected/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-kube-api-access-gvsqr\") pod \"glance-db-create-89jz2\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.391312 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr445\" (UniqueName: \"kubernetes.io/projected/ba02b5bd-787d-4939-bfd7-5875a25173dc-kube-api-access-vr445\") pod \"glance-b232-account-create-update-rlm2c\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.391372 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba02b5bd-787d-4939-bfd7-5875a25173dc-operator-scripts\") pod \"glance-b232-account-create-update-rlm2c\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.392063 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba02b5bd-787d-4939-bfd7-5875a25173dc-operator-scripts\") pod \"glance-b232-account-create-update-rlm2c\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.402175 4695 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/glance-db-create-89jz2" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.416255 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-nbvwd"] Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.435964 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr445\" (UniqueName: \"kubernetes.io/projected/ba02b5bd-787d-4939-bfd7-5875a25173dc-kube-api-access-vr445\") pod \"glance-b232-account-create-update-rlm2c\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: W1126 13:43:17.471701 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0449bc56_cc0a_42e9_a59e_c89c8af8b64c.slice/crio-7eb4ff1f6afc67fec7975be7224c589c53212cadd71a734abc347078156ae1c6 WatchSource:0}: Error finding container 7eb4ff1f6afc67fec7975be7224c589c53212cadd71a734abc347078156ae1c6: Status 404 returned error can't find the container with id 7eb4ff1f6afc67fec7975be7224c589c53212cadd71a734abc347078156ae1c6 Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.528922 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b669-account-create-update-sksw2"] Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.555901 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.777112 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dm6ln"] Nov 26 13:43:17 crc kubenswrapper[4695]: W1126 13:43:17.879516 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88cbdc83_2c93_4fa9_8fca_dcce51f8f59d.slice/crio-f5ba6e22c40c24d60d97aee639b46647e341288d9785227d889a4677020fedcf WatchSource:0}: Error finding container f5ba6e22c40c24d60d97aee639b46647e341288d9785227d889a4677020fedcf: Status 404 returned error can't find the container with id f5ba6e22c40c24d60d97aee639b46647e341288d9785227d889a4677020fedcf Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.897842 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3897-account-create-update-dvm5t"] Nov 26 13:43:17 crc kubenswrapper[4695]: W1126 13:43:17.996048 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d3c131c_b74a_4f3e_bf6d_c490a8d300c7.slice/crio-fa85238d888f9f9d0a85286b383c37f37dff22b544fb01078af43412ae51e556 WatchSource:0}: Error finding container fa85238d888f9f9d0a85286b383c37f37dff22b544fb01078af43412ae51e556: Status 404 returned error can't find the container with id fa85238d888f9f9d0a85286b383c37f37dff22b544fb01078af43412ae51e556 Nov 26 13:43:17 crc kubenswrapper[4695]: I1126 13:43:17.998146 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-89jz2"] Nov 26 13:43:18 crc kubenswrapper[4695]: I1126 13:43:18.158284 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b232-account-create-update-rlm2c"] Nov 26 13:43:18 crc kubenswrapper[4695]: W1126 13:43:18.161217 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba02b5bd_787d_4939_bfd7_5875a25173dc.slice/crio-01a679c3fe6d2597141f48e3346062f808cfad96ba151b911f85cf196400b76e WatchSource:0}: Error finding container 01a679c3fe6d2597141f48e3346062f808cfad96ba151b911f85cf196400b76e: Status 404 returned error can't find the container with id 01a679c3fe6d2597141f48e3346062f808cfad96ba151b911f85cf196400b76e Nov 26 13:43:18 crc kubenswrapper[4695]: I1126 13:43:18.479679 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b669-account-create-update-sksw2" event={"ID":"889f9407-9537-4078-91f9-01e10810dd66","Type":"ContainerStarted","Data":"ac7a7bea4224c84a0a8033fd5eb7701ab5f047dc00fbee6ed3f6856703b6c777"} Nov 26 13:43:18 crc kubenswrapper[4695]: I1126 13:43:18.481053 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbvwd" event={"ID":"0449bc56-cc0a-42e9-a59e-c89c8af8b64c","Type":"ContainerStarted","Data":"7eb4ff1f6afc67fec7975be7224c589c53212cadd71a734abc347078156ae1c6"} Nov 26 13:43:18 crc kubenswrapper[4695]: I1126 13:43:18.482199 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3897-account-create-update-dvm5t" event={"ID":"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d","Type":"ContainerStarted","Data":"f5ba6e22c40c24d60d97aee639b46647e341288d9785227d889a4677020fedcf"} Nov 26 13:43:18 crc kubenswrapper[4695]: I1126 13:43:18.483166 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b232-account-create-update-rlm2c" event={"ID":"ba02b5bd-787d-4939-bfd7-5875a25173dc","Type":"ContainerStarted","Data":"01a679c3fe6d2597141f48e3346062f808cfad96ba151b911f85cf196400b76e"} Nov 26 13:43:18 crc kubenswrapper[4695]: I1126 13:43:18.484160 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-89jz2" 
event={"ID":"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7","Type":"ContainerStarted","Data":"fa85238d888f9f9d0a85286b383c37f37dff22b544fb01078af43412ae51e556"} Nov 26 13:43:18 crc kubenswrapper[4695]: I1126 13:43:18.487460 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dm6ln" event={"ID":"ce3c192f-2601-41cc-a59f-716af5cffc4c","Type":"ContainerStarted","Data":"b4e358b4954e76fe9625149ad27d67276e06d81a00c2d149ba22433c0853e3e6"} Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.496323 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-89jz2" event={"ID":"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7","Type":"ContainerStarted","Data":"b135f9e3bb5cf0f6f018142fb9b30fa3e7ef58f6458e703f8285c728dc576975"} Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.497902 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dm6ln" event={"ID":"ce3c192f-2601-41cc-a59f-716af5cffc4c","Type":"ContainerStarted","Data":"ccb276f1eff0de3d00f846752385ce7e0830484da77fcffb75dc76e9bc16af79"} Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.499870 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b669-account-create-update-sksw2" event={"ID":"889f9407-9537-4078-91f9-01e10810dd66","Type":"ContainerStarted","Data":"5c0eb75d07f93655baac364818a492a7fc447a82bb8e3dbc65049812e44edd8d"} Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.501380 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbvwd" event={"ID":"0449bc56-cc0a-42e9-a59e-c89c8af8b64c","Type":"ContainerStarted","Data":"4e8b42fecfde496f6c6b1290b512311a344408571580bac4b6e2dd6a63a770cc"} Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.502819 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3897-account-create-update-dvm5t" 
event={"ID":"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d","Type":"ContainerStarted","Data":"e145ca5294c61ea43fc706c3ba959e1770f298eb127f1ae6ad2a8dbb8d4f7e06"} Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.504222 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b232-account-create-update-rlm2c" event={"ID":"ba02b5bd-787d-4939-bfd7-5875a25173dc","Type":"ContainerStarted","Data":"7fa0c9ffc72bc74b80d11112d408d191f3e8f6cd3d12c6f4b7c26c24052562c9"} Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.520518 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-nbvwd" podStartSLOduration=3.520491555 podStartE2EDuration="3.520491555s" podCreationTimestamp="2025-11-26 13:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:19.515237718 +0000 UTC m=+1183.151062800" watchObservedRunningTime="2025-11-26 13:43:19.520491555 +0000 UTC m=+1183.156316647" Nov 26 13:43:19 crc kubenswrapper[4695]: I1126 13:43:19.533133 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b232-account-create-update-rlm2c" podStartSLOduration=2.533115406 podStartE2EDuration="2.533115406s" podCreationTimestamp="2025-11-26 13:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:19.52630836 +0000 UTC m=+1183.162133462" watchObservedRunningTime="2025-11-26 13:43:19.533115406 +0000 UTC m=+1183.168940488" Nov 26 13:43:20 crc kubenswrapper[4695]: I1126 13:43:20.540513 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-b669-account-create-update-sksw2" podStartSLOduration=4.540493447 podStartE2EDuration="4.540493447s" podCreationTimestamp="2025-11-26 13:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:20.533573028 +0000 UTC m=+1184.169398120" watchObservedRunningTime="2025-11-26 13:43:20.540493447 +0000 UTC m=+1184.176318529" Nov 26 13:43:20 crc kubenswrapper[4695]: I1126 13:43:20.552878 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-dm6ln" podStartSLOduration=4.5528555520000005 podStartE2EDuration="4.552855552s" podCreationTimestamp="2025-11-26 13:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:20.547505651 +0000 UTC m=+1184.183330753" watchObservedRunningTime="2025-11-26 13:43:20.552855552 +0000 UTC m=+1184.188680634" Nov 26 13:43:20 crc kubenswrapper[4695]: I1126 13:43:20.580202 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-89jz2" podStartSLOduration=3.580166341 podStartE2EDuration="3.580166341s" podCreationTimestamp="2025-11-26 13:43:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:20.566487726 +0000 UTC m=+1184.202312838" watchObservedRunningTime="2025-11-26 13:43:20.580166341 +0000 UTC m=+1184.215991453" Nov 26 13:43:20 crc kubenswrapper[4695]: I1126 13:43:20.590087 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3897-account-create-update-dvm5t" podStartSLOduration=4.590065757 podStartE2EDuration="4.590065757s" podCreationTimestamp="2025-11-26 13:43:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:20.579558131 +0000 UTC m=+1184.215383223" watchObservedRunningTime="2025-11-26 13:43:20.590065757 +0000 UTC m=+1184.225890849" Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 
13:43:21.523207 4695 generic.go:334] "Generic (PLEG): container finished" podID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerID="cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854" exitCode=0 Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:21.523309 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27495d77-50c6-4476-86c3-dafb0e5dbb97","Type":"ContainerDied","Data":"cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854"} Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:21.525932 4695 generic.go:334] "Generic (PLEG): container finished" podID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerID="67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93" exitCode=0 Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:21.525981 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a771ad6-98a9-474e-83f0-e17fecdee9be","Type":"ContainerDied","Data":"67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93"} Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:22.534719 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a771ad6-98a9-474e-83f0-e17fecdee9be","Type":"ContainerStarted","Data":"bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c"} Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:22.537881 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27495d77-50c6-4476-86c3-dafb0e5dbb97","Type":"ContainerStarted","Data":"7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3"} Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:23.544071 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:23.572752 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=60.949152314 podStartE2EDuration="1m1.572710762s" podCreationTimestamp="2025-11-26 13:42:22 +0000 UTC" firstStartedPulling="2025-11-26 13:42:46.675955001 +0000 UTC m=+1150.311780093" lastFinishedPulling="2025-11-26 13:42:47.299513459 +0000 UTC m=+1150.935338541" observedRunningTime="2025-11-26 13:43:23.563033025 +0000 UTC m=+1187.198858147" watchObservedRunningTime="2025-11-26 13:43:23.572710762 +0000 UTC m=+1187.208535854" Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:23.577454 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 26 13:43:25 crc kubenswrapper[4695]: I1126 13:43:23.594863 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=42.305867755 podStartE2EDuration="1m1.594843837s" podCreationTimestamp="2025-11-26 13:42:22 +0000 UTC" firstStartedPulling="2025-11-26 13:42:27.570201924 +0000 UTC m=+1131.206026996" lastFinishedPulling="2025-11-26 13:42:46.859177996 +0000 UTC m=+1150.495003078" observedRunningTime="2025-11-26 13:43:23.586136729 +0000 UTC m=+1187.221961831" watchObservedRunningTime="2025-11-26 13:43:23.594843837 +0000 UTC m=+1187.230668939" Nov 26 13:43:27 crc kubenswrapper[4695]: I1126 13:43:27.345903 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zvx8d" podUID="9f98833b-dbaf-42bc-a424-8094e025ce87" containerName="ovn-controller" probeResult="failure" output=< Nov 26 13:43:27 crc kubenswrapper[4695]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 26 13:43:27 crc kubenswrapper[4695]: > Nov 26 13:43:27 crc kubenswrapper[4695]: I1126 13:43:27.412917 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:43:29 crc kubenswrapper[4695]: I1126 13:43:29.596757 4695 generic.go:334] "Generic (PLEG): container 
finished" podID="bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" containerID="3c5a372ca5c4a908b8f2cf1254b9c671b7c0fc907e205262a541b5292c992d36" exitCode=0 Nov 26 13:43:29 crc kubenswrapper[4695]: I1126 13:43:29.596909 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n2464" event={"ID":"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205","Type":"ContainerDied","Data":"3c5a372ca5c4a908b8f2cf1254b9c671b7c0fc907e205262a541b5292c992d36"} Nov 26 13:43:29 crc kubenswrapper[4695]: I1126 13:43:29.600007 4695 generic.go:334] "Generic (PLEG): container finished" podID="0449bc56-cc0a-42e9-a59e-c89c8af8b64c" containerID="4e8b42fecfde496f6c6b1290b512311a344408571580bac4b6e2dd6a63a770cc" exitCode=0 Nov 26 13:43:29 crc kubenswrapper[4695]: I1126 13:43:29.600056 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbvwd" event={"ID":"0449bc56-cc0a-42e9-a59e-c89c8af8b64c","Type":"ContainerDied","Data":"4e8b42fecfde496f6c6b1290b512311a344408571580bac4b6e2dd6a63a770cc"} Nov 26 13:43:30 crc kubenswrapper[4695]: I1126 13:43:30.610160 4695 generic.go:334] "Generic (PLEG): container finished" podID="6d3c131c-b74a-4f3e-bf6d-c490a8d300c7" containerID="b135f9e3bb5cf0f6f018142fb9b30fa3e7ef58f6458e703f8285c728dc576975" exitCode=0 Nov 26 13:43:30 crc kubenswrapper[4695]: I1126 13:43:30.610333 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-89jz2" event={"ID":"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7","Type":"ContainerDied","Data":"b135f9e3bb5cf0f6f018142fb9b30fa3e7ef58f6458e703f8285c728dc576975"} Nov 26 13:43:30 crc kubenswrapper[4695]: I1126 13:43:30.998164 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.008506 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.128841 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-swiftconf\") pod \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.128913 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-ring-data-devices\") pod \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.128945 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknxx\" (UniqueName: \"kubernetes.io/projected/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-kube-api-access-tknxx\") pod \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.128983 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-combined-ca-bundle\") pod \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.129019 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-dispersionconf\") pod \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.129043 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-scripts\") pod \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.129070 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67f75\" (UniqueName: \"kubernetes.io/projected/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-kube-api-access-67f75\") pod \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.129171 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-etc-swift\") pod \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\" (UID: \"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.129201 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-operator-scripts\") pod \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\" (UID: \"0449bc56-cc0a-42e9-a59e-c89c8af8b64c\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.130487 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0449bc56-cc0a-42e9-a59e-c89c8af8b64c" (UID: "0449bc56-cc0a-42e9-a59e-c89c8af8b64c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.130864 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" (UID: "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.131495 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" (UID: "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.149625 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-kube-api-access-67f75" (OuterVolumeSpecName: "kube-api-access-67f75") pod "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" (UID: "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205"). InnerVolumeSpecName "kube-api-access-67f75". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.149680 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-kube-api-access-tknxx" (OuterVolumeSpecName: "kube-api-access-tknxx") pod "0449bc56-cc0a-42e9-a59e-c89c8af8b64c" (UID: "0449bc56-cc0a-42e9-a59e-c89c8af8b64c"). InnerVolumeSpecName "kube-api-access-tknxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.153651 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-scripts" (OuterVolumeSpecName: "scripts") pod "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" (UID: "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.155528 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" (UID: "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.156111 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" (UID: "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.161060 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" (UID: "bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230835 4695 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230884 4695 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230897 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tknxx\" (UniqueName: \"kubernetes.io/projected/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-kube-api-access-tknxx\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230905 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230913 4695 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230921 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230928 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67f75\" (UniqueName: \"kubernetes.io/projected/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-kube-api-access-67f75\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 
13:43:31.230936 4695 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.230944 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0449bc56-cc0a-42e9-a59e-c89c8af8b64c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.617687 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-nbvwd" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.617686 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-nbvwd" event={"ID":"0449bc56-cc0a-42e9-a59e-c89c8af8b64c","Type":"ContainerDied","Data":"7eb4ff1f6afc67fec7975be7224c589c53212cadd71a734abc347078156ae1c6"} Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.617802 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb4ff1f6afc67fec7975be7224c589c53212cadd71a734abc347078156ae1c6" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.619165 4695 generic.go:334] "Generic (PLEG): container finished" podID="88cbdc83-2c93-4fa9-8fca-dcce51f8f59d" containerID="e145ca5294c61ea43fc706c3ba959e1770f298eb127f1ae6ad2a8dbb8d4f7e06" exitCode=0 Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.619201 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3897-account-create-update-dvm5t" event={"ID":"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d","Type":"ContainerDied","Data":"e145ca5294c61ea43fc706c3ba959e1770f298eb127f1ae6ad2a8dbb8d4f7e06"} Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.621985 4695 generic.go:334] "Generic (PLEG): container finished" podID="ba02b5bd-787d-4939-bfd7-5875a25173dc" 
containerID="7fa0c9ffc72bc74b80d11112d408d191f3e8f6cd3d12c6f4b7c26c24052562c9" exitCode=0 Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.622051 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b232-account-create-update-rlm2c" event={"ID":"ba02b5bd-787d-4939-bfd7-5875a25173dc","Type":"ContainerDied","Data":"7fa0c9ffc72bc74b80d11112d408d191f3e8f6cd3d12c6f4b7c26c24052562c9"} Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.623960 4695 generic.go:334] "Generic (PLEG): container finished" podID="ce3c192f-2601-41cc-a59f-716af5cffc4c" containerID="ccb276f1eff0de3d00f846752385ce7e0830484da77fcffb75dc76e9bc16af79" exitCode=0 Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.624032 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dm6ln" event={"ID":"ce3c192f-2601-41cc-a59f-716af5cffc4c","Type":"ContainerDied","Data":"ccb276f1eff0de3d00f846752385ce7e0830484da77fcffb75dc76e9bc16af79"} Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.625342 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-n2464" event={"ID":"bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205","Type":"ContainerDied","Data":"b11f0e4166c4eb2cdf5bb59b17a200c40bf369ec94a31a87dd9460bd4460ea85"} Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.625380 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b11f0e4166c4eb2cdf5bb59b17a200c40bf369ec94a31a87dd9460bd4460ea85" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.625379 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-n2464" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.626897 4695 generic.go:334] "Generic (PLEG): container finished" podID="889f9407-9537-4078-91f9-01e10810dd66" containerID="5c0eb75d07f93655baac364818a492a7fc447a82bb8e3dbc65049812e44edd8d" exitCode=0 Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.627011 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b669-account-create-update-sksw2" event={"ID":"889f9407-9537-4078-91f9-01e10810dd66","Type":"ContainerDied","Data":"5c0eb75d07f93655baac364818a492a7fc447a82bb8e3dbc65049812e44edd8d"} Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.745638 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.759737 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b77b4e90-5d1a-4724-a57f-2ff4a394d434-etc-swift\") pod \"swift-storage-0\" (UID: \"b77b4e90-5d1a-4724-a57f-2ff4a394d434\") " pod="openstack/swift-storage-0" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.860396 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-89jz2" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.894045 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.947975 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-operator-scripts\") pod \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.948068 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvsqr\" (UniqueName: \"kubernetes.io/projected/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-kube-api-access-gvsqr\") pod \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\" (UID: \"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7\") " Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.949194 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d3c131c-b74a-4f3e-bf6d-c490a8d300c7" (UID: "6d3c131c-b74a-4f3e-bf6d-c490a8d300c7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:31 crc kubenswrapper[4695]: I1126 13:43:31.952460 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-kube-api-access-gvsqr" (OuterVolumeSpecName: "kube-api-access-gvsqr") pod "6d3c131c-b74a-4f3e-bf6d-c490a8d300c7" (UID: "6d3c131c-b74a-4f3e-bf6d-c490a8d300c7"). InnerVolumeSpecName "kube-api-access-gvsqr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.049676 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.049705 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvsqr\" (UniqueName: \"kubernetes.io/projected/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7-kube-api-access-gvsqr\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.337037 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-zvx8d" podUID="9f98833b-dbaf-42bc-a424-8094e025ce87" containerName="ovn-controller" probeResult="failure" output=< Nov 26 13:43:32 crc kubenswrapper[4695]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 26 13:43:32 crc kubenswrapper[4695]: > Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.412170 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rtt8r" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.414725 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 26 13:43:32 crc kubenswrapper[4695]: W1126 13:43:32.420124 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb77b4e90_5d1a_4724_a57f_2ff4a394d434.slice/crio-6a55c9edac174003cf3e44ceb38ba1705c744d88f4bcc2a97e946357c97824da WatchSource:0}: Error finding container 6a55c9edac174003cf3e44ceb38ba1705c744d88f4bcc2a97e946357c97824da: Status 404 returned error can't find the container with id 6a55c9edac174003cf3e44ceb38ba1705c744d88f4bcc2a97e946357c97824da Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.625820 4695 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zvx8d-config-jz5sp"] Nov 26 13:43:32 crc kubenswrapper[4695]: E1126 13:43:32.626153 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d3c131c-b74a-4f3e-bf6d-c490a8d300c7" containerName="mariadb-database-create" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.626165 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d3c131c-b74a-4f3e-bf6d-c490a8d300c7" containerName="mariadb-database-create" Nov 26 13:43:32 crc kubenswrapper[4695]: E1126 13:43:32.626176 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0449bc56-cc0a-42e9-a59e-c89c8af8b64c" containerName="mariadb-database-create" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.626182 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="0449bc56-cc0a-42e9-a59e-c89c8af8b64c" containerName="mariadb-database-create" Nov 26 13:43:32 crc kubenswrapper[4695]: E1126 13:43:32.626214 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" containerName="swift-ring-rebalance" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.626222 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" containerName="swift-ring-rebalance" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.626388 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3c131c-b74a-4f3e-bf6d-c490a8d300c7" containerName="mariadb-database-create" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.626401 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205" containerName="swift-ring-rebalance" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.626408 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="0449bc56-cc0a-42e9-a59e-c89c8af8b64c" containerName="mariadb-database-create" Nov 26 13:43:32 crc 
kubenswrapper[4695]: I1126 13:43:32.626999 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.629257 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.635966 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-89jz2" event={"ID":"6d3c131c-b74a-4f3e-bf6d-c490a8d300c7","Type":"ContainerDied","Data":"fa85238d888f9f9d0a85286b383c37f37dff22b544fb01078af43412ae51e556"} Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.635999 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa85238d888f9f9d0a85286b383c37f37dff22b544fb01078af43412ae51e556" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.636081 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-89jz2" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.646717 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"6a55c9edac174003cf3e44ceb38ba1705c744d88f4bcc2a97e946357c97824da"} Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.659338 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvx8d-config-jz5sp"] Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.761447 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-log-ovn\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.761821 
4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-scripts\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.761880 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run-ovn\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.761907 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.761954 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-additional-scripts\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.761988 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfq78\" (UniqueName: \"kubernetes.io/projected/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-kube-api-access-cfq78\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " 
pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.863839 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfq78\" (UniqueName: \"kubernetes.io/projected/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-kube-api-access-cfq78\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.863935 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-log-ovn\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.864010 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-scripts\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.864098 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run-ovn\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.864142 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " 
pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.864198 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-additional-scripts\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.864527 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.864553 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run-ovn\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.864631 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-log-ovn\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.865345 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-additional-scripts\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " 
pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.867471 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-scripts\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.887670 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfq78\" (UniqueName: \"kubernetes.io/projected/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-kube-api-access-cfq78\") pod \"ovn-controller-zvx8d-config-jz5sp\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.956688 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:32 crc kubenswrapper[4695]: I1126 13:43:32.972697 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.066860 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba02b5bd-787d-4939-bfd7-5875a25173dc-operator-scripts\") pod \"ba02b5bd-787d-4939-bfd7-5875a25173dc\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.066915 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr445\" (UniqueName: \"kubernetes.io/projected/ba02b5bd-787d-4939-bfd7-5875a25173dc-kube-api-access-vr445\") pod \"ba02b5bd-787d-4939-bfd7-5875a25173dc\" (UID: \"ba02b5bd-787d-4939-bfd7-5875a25173dc\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.068272 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba02b5bd-787d-4939-bfd7-5875a25173dc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba02b5bd-787d-4939-bfd7-5875a25173dc" (UID: "ba02b5bd-787d-4939-bfd7-5875a25173dc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.074142 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba02b5bd-787d-4939-bfd7-5875a25173dc-kube-api-access-vr445" (OuterVolumeSpecName: "kube-api-access-vr445") pod "ba02b5bd-787d-4939-bfd7-5875a25173dc" (UID: "ba02b5bd-787d-4939-bfd7-5875a25173dc"). InnerVolumeSpecName "kube-api-access-vr445". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.168741 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba02b5bd-787d-4939-bfd7-5875a25173dc-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.168781 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr445\" (UniqueName: \"kubernetes.io/projected/ba02b5bd-787d-4939-bfd7-5875a25173dc-kube-api-access-vr445\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.191214 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.211441 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.228450 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.270106 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-operator-scripts\") pod \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\" (UID: \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.270255 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79n52\" (UniqueName: \"kubernetes.io/projected/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-kube-api-access-79n52\") pod \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\" (UID: \"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.270794 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88cbdc83-2c93-4fa9-8fca-dcce51f8f59d" (UID: "88cbdc83-2c93-4fa9-8fca-dcce51f8f59d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.277667 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-kube-api-access-79n52" (OuterVolumeSpecName: "kube-api-access-79n52") pod "88cbdc83-2c93-4fa9-8fca-dcce51f8f59d" (UID: "88cbdc83-2c93-4fa9-8fca-dcce51f8f59d"). InnerVolumeSpecName "kube-api-access-79n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.371930 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8442l\" (UniqueName: \"kubernetes.io/projected/889f9407-9537-4078-91f9-01e10810dd66-kube-api-access-8442l\") pod \"889f9407-9537-4078-91f9-01e10810dd66\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.372406 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3c192f-2601-41cc-a59f-716af5cffc4c-operator-scripts\") pod \"ce3c192f-2601-41cc-a59f-716af5cffc4c\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.372558 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889f9407-9537-4078-91f9-01e10810dd66-operator-scripts\") pod \"889f9407-9537-4078-91f9-01e10810dd66\" (UID: \"889f9407-9537-4078-91f9-01e10810dd66\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.372592 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nl5p\" (UniqueName: \"kubernetes.io/projected/ce3c192f-2601-41cc-a59f-716af5cffc4c-kube-api-access-8nl5p\") pod \"ce3c192f-2601-41cc-a59f-716af5cffc4c\" (UID: \"ce3c192f-2601-41cc-a59f-716af5cffc4c\") " Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.372933 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce3c192f-2601-41cc-a59f-716af5cffc4c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ce3c192f-2601-41cc-a59f-716af5cffc4c" (UID: "ce3c192f-2601-41cc-a59f-716af5cffc4c"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.373125 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79n52\" (UniqueName: \"kubernetes.io/projected/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-kube-api-access-79n52\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.373144 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ce3c192f-2601-41cc-a59f-716af5cffc4c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.373155 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.373205 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/889f9407-9537-4078-91f9-01e10810dd66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "889f9407-9537-4078-91f9-01e10810dd66" (UID: "889f9407-9537-4078-91f9-01e10810dd66"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.375630 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce3c192f-2601-41cc-a59f-716af5cffc4c-kube-api-access-8nl5p" (OuterVolumeSpecName: "kube-api-access-8nl5p") pod "ce3c192f-2601-41cc-a59f-716af5cffc4c" (UID: "ce3c192f-2601-41cc-a59f-716af5cffc4c"). InnerVolumeSpecName "kube-api-access-8nl5p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.376924 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/889f9407-9537-4078-91f9-01e10810dd66-kube-api-access-8442l" (OuterVolumeSpecName: "kube-api-access-8442l") pod "889f9407-9537-4078-91f9-01e10810dd66" (UID: "889f9407-9537-4078-91f9-01e10810dd66"). InnerVolumeSpecName "kube-api-access-8442l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.474551 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/889f9407-9537-4078-91f9-01e10810dd66-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.474589 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nl5p\" (UniqueName: \"kubernetes.io/projected/ce3c192f-2601-41cc-a59f-716af5cffc4c-kube-api-access-8nl5p\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.474600 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8442l\" (UniqueName: \"kubernetes.io/projected/889f9407-9537-4078-91f9-01e10810dd66-kube-api-access-8442l\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.579540 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 26 13:43:33 crc kubenswrapper[4695]: W1126 13:43:33.591022 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5ad11a4c_505b_4cba_b70c_5f745ffdfd0c.slice/crio-5e78a0ffde1ed1bc3f3fdc7c637b5d8dc45004d801a3b8f041df341e29ca3b81 WatchSource:0}: Error finding container 5e78a0ffde1ed1bc3f3fdc7c637b5d8dc45004d801a3b8f041df341e29ca3b81: Status 404 returned error can't find the container with 
id 5e78a0ffde1ed1bc3f3fdc7c637b5d8dc45004d801a3b8f041df341e29ca3b81 Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.591521 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvx8d-config-jz5sp"] Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.665346 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3897-account-create-update-dvm5t" event={"ID":"88cbdc83-2c93-4fa9-8fca-dcce51f8f59d","Type":"ContainerDied","Data":"f5ba6e22c40c24d60d97aee639b46647e341288d9785227d889a4677020fedcf"} Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.665407 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5ba6e22c40c24d60d97aee639b46647e341288d9785227d889a4677020fedcf" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.665476 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3897-account-create-update-dvm5t" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.667238 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-jz5sp" event={"ID":"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c","Type":"ContainerStarted","Data":"5e78a0ffde1ed1bc3f3fdc7c637b5d8dc45004d801a3b8f041df341e29ca3b81"} Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.672698 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b232-account-create-update-rlm2c" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.673080 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b232-account-create-update-rlm2c" event={"ID":"ba02b5bd-787d-4939-bfd7-5875a25173dc","Type":"ContainerDied","Data":"01a679c3fe6d2597141f48e3346062f808cfad96ba151b911f85cf196400b76e"} Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.673141 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01a679c3fe6d2597141f48e3346062f808cfad96ba151b911f85cf196400b76e" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.679869 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dm6ln" event={"ID":"ce3c192f-2601-41cc-a59f-716af5cffc4c","Type":"ContainerDied","Data":"b4e358b4954e76fe9625149ad27d67276e06d81a00c2d149ba22433c0853e3e6"} Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.679910 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e358b4954e76fe9625149ad27d67276e06d81a00c2d149ba22433c0853e3e6" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.679945 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dm6ln" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.688037 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b669-account-create-update-sksw2" event={"ID":"889f9407-9537-4078-91f9-01e10810dd66","Type":"ContainerDied","Data":"ac7a7bea4224c84a0a8033fd5eb7701ab5f047dc00fbee6ed3f6856703b6c777"} Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.688086 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac7a7bea4224c84a0a8033fd5eb7701ab5f047dc00fbee6ed3f6856703b6c777" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.688281 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b669-account-create-update-sksw2" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.895149 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-n95vb"] Nov 26 13:43:33 crc kubenswrapper[4695]: E1126 13:43:33.896385 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88cbdc83-2c93-4fa9-8fca-dcce51f8f59d" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896406 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="88cbdc83-2c93-4fa9-8fca-dcce51f8f59d" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: E1126 13:43:33.896424 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="889f9407-9537-4078-91f9-01e10810dd66" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896455 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="889f9407-9537-4078-91f9-01e10810dd66" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: E1126 13:43:33.896474 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce3c192f-2601-41cc-a59f-716af5cffc4c" containerName="mariadb-database-create" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896482 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce3c192f-2601-41cc-a59f-716af5cffc4c" containerName="mariadb-database-create" Nov 26 13:43:33 crc kubenswrapper[4695]: E1126 13:43:33.896506 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba02b5bd-787d-4939-bfd7-5875a25173dc" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896537 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba02b5bd-787d-4939-bfd7-5875a25173dc" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896828 4695 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ba02b5bd-787d-4939-bfd7-5875a25173dc" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896880 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="889f9407-9537-4078-91f9-01e10810dd66" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896897 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="88cbdc83-2c93-4fa9-8fca-dcce51f8f59d" containerName="mariadb-account-create-update" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.896912 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce3c192f-2601-41cc-a59f-716af5cffc4c" containerName="mariadb-database-create" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.898056 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.917297 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-n95vb"] Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.938004 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.991423 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rn25\" (UniqueName: \"kubernetes.io/projected/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-kube-api-access-5rn25\") pod \"cinder-db-create-n95vb\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.991475 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-operator-scripts\") 
pod \"cinder-db-create-n95vb\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:33 crc kubenswrapper[4695]: I1126 13:43:33.998668 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-k62d7"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.000135 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.019500 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-9d5a-account-create-update-h8n7j"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.020757 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.029511 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.029546 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k62d7"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.036032 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9d5a-account-create-update-h8n7j"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.092652 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rn25\" (UniqueName: \"kubernetes.io/projected/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-kube-api-access-5rn25\") pod \"cinder-db-create-n95vb\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.092691 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-operator-scripts\") pod 
\"cinder-db-create-n95vb\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.092772 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d572e570-4517-4440-9226-7c432f0e318c-operator-scripts\") pod \"barbican-db-create-k62d7\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.092830 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sf56\" (UniqueName: \"kubernetes.io/projected/d572e570-4517-4440-9226-7c432f0e318c-kube-api-access-4sf56\") pod \"barbican-db-create-k62d7\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.092877 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-operator-scripts\") pod \"cinder-9d5a-account-create-update-h8n7j\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.092914 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5h4j\" (UniqueName: \"kubernetes.io/projected/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-kube-api-access-g5h4j\") pod \"cinder-9d5a-account-create-update-h8n7j\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.094086 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-operator-scripts\") pod \"cinder-db-create-n95vb\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.099104 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8be9-account-create-update-shnqq"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.100318 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.109263 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.116933 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8be9-account-create-update-shnqq"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.154754 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rn25\" (UniqueName: \"kubernetes.io/projected/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-kube-api-access-5rn25\") pod \"cinder-db-create-n95vb\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.194037 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d572e570-4517-4440-9226-7c432f0e318c-operator-scripts\") pod \"barbican-db-create-k62d7\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.194453 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4820ce3c-a324-4d56-a27a-9dd2695e286f-operator-scripts\") pod 
\"barbican-8be9-account-create-update-shnqq\" (UID: \"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.194519 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sf56\" (UniqueName: \"kubernetes.io/projected/d572e570-4517-4440-9226-7c432f0e318c-kube-api-access-4sf56\") pod \"barbican-db-create-k62d7\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.194545 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rksmz\" (UniqueName: \"kubernetes.io/projected/4820ce3c-a324-4d56-a27a-9dd2695e286f-kube-api-access-rksmz\") pod \"barbican-8be9-account-create-update-shnqq\" (UID: \"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.194581 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-operator-scripts\") pod \"cinder-9d5a-account-create-update-h8n7j\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.194616 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5h4j\" (UniqueName: \"kubernetes.io/projected/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-kube-api-access-g5h4j\") pod \"cinder-9d5a-account-create-update-h8n7j\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.194710 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/d572e570-4517-4440-9226-7c432f0e318c-operator-scripts\") pod \"barbican-db-create-k62d7\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.195199 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-operator-scripts\") pod \"cinder-9d5a-account-create-update-h8n7j\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.228091 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sf56\" (UniqueName: \"kubernetes.io/projected/d572e570-4517-4440-9226-7c432f0e318c-kube-api-access-4sf56\") pod \"barbican-db-create-k62d7\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.234468 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5h4j\" (UniqueName: \"kubernetes.io/projected/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-kube-api-access-g5h4j\") pod \"cinder-9d5a-account-create-update-h8n7j\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.280232 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.280928 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-c5874"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.282162 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.295054 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c5874"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.295553 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rksmz\" (UniqueName: \"kubernetes.io/projected/4820ce3c-a324-4d56-a27a-9dd2695e286f-kube-api-access-rksmz\") pod \"barbican-8be9-account-create-update-shnqq\" (UID: \"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.295699 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4820ce3c-a324-4d56-a27a-9dd2695e286f-operator-scripts\") pod \"barbican-8be9-account-create-update-shnqq\" (UID: \"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.296319 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4820ce3c-a324-4d56-a27a-9dd2695e286f-operator-scripts\") pod \"barbican-8be9-account-create-update-shnqq\" (UID: \"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.316860 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rksmz\" (UniqueName: \"kubernetes.io/projected/4820ce3c-a324-4d56-a27a-9dd2695e286f-kube-api-access-rksmz\") pod \"barbican-8be9-account-create-update-shnqq\" (UID: \"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.318724 4695 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.349953 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.385607 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e1ab-account-create-update-tqhkx"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.387275 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.392694 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.399248 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c86a814-b63d-48d8-98de-6f560864c876-operator-scripts\") pod \"neutron-db-create-c5874\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.399435 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvft7\" (UniqueName: \"kubernetes.io/projected/4c86a814-b63d-48d8-98de-6f560864c876-kube-api-access-fvft7\") pod \"neutron-db-create-c5874\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.419644 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e1ab-account-create-update-tqhkx"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.470775 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.503280 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvft7\" (UniqueName: \"kubernetes.io/projected/4c86a814-b63d-48d8-98de-6f560864c876-kube-api-access-fvft7\") pod \"neutron-db-create-c5874\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.515340 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4c9h\" (UniqueName: \"kubernetes.io/projected/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-kube-api-access-w4c9h\") pod \"neutron-e1ab-account-create-update-tqhkx\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.515558 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-operator-scripts\") pod \"neutron-e1ab-account-create-update-tqhkx\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.515604 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c86a814-b63d-48d8-98de-6f560864c876-operator-scripts\") pod \"neutron-db-create-c5874\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.516489 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c86a814-b63d-48d8-98de-6f560864c876-operator-scripts\") pod 
\"neutron-db-create-c5874\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.529437 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvft7\" (UniqueName: \"kubernetes.io/projected/4c86a814-b63d-48d8-98de-6f560864c876-kube-api-access-fvft7\") pod \"neutron-db-create-c5874\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.617697 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-operator-scripts\") pod \"neutron-e1ab-account-create-update-tqhkx\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.617836 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4c9h\" (UniqueName: \"kubernetes.io/projected/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-kube-api-access-w4c9h\") pod \"neutron-e1ab-account-create-update-tqhkx\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.618879 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-operator-scripts\") pod \"neutron-e1ab-account-create-update-tqhkx\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.643162 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4c9h\" (UniqueName: 
\"kubernetes.io/projected/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-kube-api-access-w4c9h\") pod \"neutron-e1ab-account-create-update-tqhkx\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.645723 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c5874" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.727106 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-jz5sp" event={"ID":"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c","Type":"ContainerStarted","Data":"e04dba50bdacaa6d716eb39c213506ec7380b5a69f95a1b88ecc934d4e75d15c"} Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.730916 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.895229 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-n95vb"] Nov 26 13:43:34 crc kubenswrapper[4695]: I1126 13:43:34.937340 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-k62d7"] Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.047269 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8be9-account-create-update-shnqq"] Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.110599 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-9d5a-account-create-update-h8n7j"] Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.337314 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-c5874"] Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.354922 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e1ab-account-create-update-tqhkx"] Nov 26 13:43:35 crc kubenswrapper[4695]: W1126 
13:43:35.372640 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c86a814_b63d_48d8_98de_6f560864c876.slice/crio-5189452eeab11f8b4ea70c056827afa5f6354b5c214621175d72a50adc81edf6 WatchSource:0}: Error finding container 5189452eeab11f8b4ea70c056827afa5f6354b5c214621175d72a50adc81edf6: Status 404 returned error can't find the container with id 5189452eeab11f8b4ea70c056827afa5f6354b5c214621175d72a50adc81edf6 Nov 26 13:43:35 crc kubenswrapper[4695]: W1126 13:43:35.383299 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda17d04d7_2a8b_4b1f_8aa4_7472ab0ce99d.slice/crio-7d60a89805d592f8e5bcd483abda88e6460bf927d2128162c67e8338169b6d7b WatchSource:0}: Error finding container 7d60a89805d592f8e5bcd483abda88e6460bf927d2128162c67e8338169b6d7b: Status 404 returned error can't find the container with id 7d60a89805d592f8e5bcd483abda88e6460bf927d2128162c67e8338169b6d7b Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.740333 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8be9-account-create-update-shnqq" event={"ID":"4820ce3c-a324-4d56-a27a-9dd2695e286f","Type":"ContainerStarted","Data":"0d88de6249a4d7f97603706af4c6b57ebf090de4b615bc1aff505126fb8b47cd"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.742089 4695 generic.go:334] "Generic (PLEG): container finished" podID="5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" containerID="e04dba50bdacaa6d716eb39c213506ec7380b5a69f95a1b88ecc934d4e75d15c" exitCode=0 Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.742153 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-jz5sp" event={"ID":"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c","Type":"ContainerDied","Data":"e04dba50bdacaa6d716eb39c213506ec7380b5a69f95a1b88ecc934d4e75d15c"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.748424 
4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n95vb" event={"ID":"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9","Type":"ContainerStarted","Data":"d3784b39c3303c2f7ae6d1b9d49a4eaf104ae83032ec49fc7c88c3e3fb91e7cc"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.748475 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n95vb" event={"ID":"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9","Type":"ContainerStarted","Data":"d7b2197b342eeba906f9be9e3efb0424cd313ce75e9241831aafb8e8f4d47f90"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.756649 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c5874" event={"ID":"4c86a814-b63d-48d8-98de-6f560864c876","Type":"ContainerStarted","Data":"5189452eeab11f8b4ea70c056827afa5f6354b5c214621175d72a50adc81edf6"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.759580 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k62d7" event={"ID":"d572e570-4517-4440-9226-7c432f0e318c","Type":"ContainerStarted","Data":"1146cbdefb024635c96fcb30c9a43949d0daf6fad215a9e9dfb8d21d690e8efd"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.759643 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k62d7" event={"ID":"d572e570-4517-4440-9226-7c432f0e318c","Type":"ContainerStarted","Data":"8347442813de53f49dd972ae95d5d11e77eb0227cb391dce893b658cbf9c75fb"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.761205 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e1ab-account-create-update-tqhkx" event={"ID":"9c693edf-bd03-49b1-b26e-0bca3bae1cf0","Type":"ContainerStarted","Data":"28577305bb893a1526ed3e9c26f70bacfa9fa1423472d963b582a721d61edf9a"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.765670 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9d5a-account-create-update-h8n7j" 
event={"ID":"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d","Type":"ContainerStarted","Data":"7d60a89805d592f8e5bcd483abda88e6460bf927d2128162c67e8338169b6d7b"} Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.769646 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-n95vb" podStartSLOduration=2.769624878 podStartE2EDuration="2.769624878s" podCreationTimestamp="2025-11-26 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:35.76498084 +0000 UTC m=+1199.400805932" watchObservedRunningTime="2025-11-26 13:43:35.769624878 +0000 UTC m=+1199.405449990" Nov 26 13:43:35 crc kubenswrapper[4695]: I1126 13:43:35.787163 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-k62d7" podStartSLOduration=2.7871369550000002 podStartE2EDuration="2.787136955s" podCreationTimestamp="2025-11-26 13:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:35.779893435 +0000 UTC m=+1199.415718517" watchObservedRunningTime="2025-11-26 13:43:35.787136955 +0000 UTC m=+1199.422962047" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.365467 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.458325 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-scripts\") pod \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.458546 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run\") pod \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.458624 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-additional-scripts\") pod \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.458765 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run-ovn\") pod \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.458792 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-log-ovn\") pod \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.458828 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfq78\" (UniqueName: 
\"kubernetes.io/projected/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-kube-api-access-cfq78\") pod \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\" (UID: \"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c\") " Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.459081 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" (UID: "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.459443 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run" (OuterVolumeSpecName: "var-run") pod "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" (UID: "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.459509 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" (UID: "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.460144 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" (UID: "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.460268 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-scripts" (OuterVolumeSpecName: "scripts") pod "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" (UID: "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.465589 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-kube-api-access-cfq78" (OuterVolumeSpecName: "kube-api-access-cfq78") pod "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" (UID: "5ad11a4c-505b-4cba-b70c-5f745ffdfd0c"). InnerVolumeSpecName "kube-api-access-cfq78". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.560977 4695 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.561009 4695 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.561019 4695 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.561027 4695 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 
26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.561036 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfq78\" (UniqueName: \"kubernetes.io/projected/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-kube-api-access-cfq78\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.561044 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.774998 4695 generic.go:334] "Generic (PLEG): container finished" podID="a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d" containerID="1990c48e5adce583bb4dec1a557c2138c9c77c9d2125f0f4a63eff0e766767ec" exitCode=0 Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.775725 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9d5a-account-create-update-h8n7j" event={"ID":"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d","Type":"ContainerDied","Data":"1990c48e5adce583bb4dec1a557c2138c9c77c9d2125f0f4a63eff0e766767ec"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.786557 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"65b8cc114dbdee520f5add80624e9e494037917b0764e76bb39c0e1c69fba0ca"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.786608 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"4a0915ef78a94e2d6e95dbc04713fb6537e19e0a574a7488cabf525a00505640"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.786625 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"0d69e824c6fb14670e008063f56fbea1613d1b3cede9da30829435192efe7ac5"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.791241 4695 generic.go:334] "Generic (PLEG): container finished" podID="4820ce3c-a324-4d56-a27a-9dd2695e286f" containerID="c6b71a53d6ee48329275f67c4ac1c81c799db724d6e3d7d7c942b83790991dc0" exitCode=0 Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.791353 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8be9-account-create-update-shnqq" event={"ID":"4820ce3c-a324-4d56-a27a-9dd2695e286f","Type":"ContainerDied","Data":"c6b71a53d6ee48329275f67c4ac1c81c799db724d6e3d7d7c942b83790991dc0"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.793610 4695 generic.go:334] "Generic (PLEG): container finished" podID="b26d39e6-265d-44a3-a6f3-ca353fa4d0a9" containerID="d3784b39c3303c2f7ae6d1b9d49a4eaf104ae83032ec49fc7c88c3e3fb91e7cc" exitCode=0 Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.793709 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n95vb" event={"ID":"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9","Type":"ContainerDied","Data":"d3784b39c3303c2f7ae6d1b9d49a4eaf104ae83032ec49fc7c88c3e3fb91e7cc"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.796698 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-jz5sp" event={"ID":"5ad11a4c-505b-4cba-b70c-5f745ffdfd0c","Type":"ContainerDied","Data":"5e78a0ffde1ed1bc3f3fdc7c637b5d8dc45004d801a3b8f041df341e29ca3b81"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.796736 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e78a0ffde1ed1bc3f3fdc7c637b5d8dc45004d801a3b8f041df341e29ca3b81" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.796791 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-jz5sp" Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.800939 4695 generic.go:334] "Generic (PLEG): container finished" podID="4c86a814-b63d-48d8-98de-6f560864c876" containerID="bb340cb4226b6ec4bc5986974502f05bafbba9f1eefd12bb5e6ef6fc8f07958c" exitCode=0 Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.800993 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c5874" event={"ID":"4c86a814-b63d-48d8-98de-6f560864c876","Type":"ContainerDied","Data":"bb340cb4226b6ec4bc5986974502f05bafbba9f1eefd12bb5e6ef6fc8f07958c"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.812768 4695 generic.go:334] "Generic (PLEG): container finished" podID="d572e570-4517-4440-9226-7c432f0e318c" containerID="1146cbdefb024635c96fcb30c9a43949d0daf6fad215a9e9dfb8d21d690e8efd" exitCode=0 Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.812978 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k62d7" event={"ID":"d572e570-4517-4440-9226-7c432f0e318c","Type":"ContainerDied","Data":"1146cbdefb024635c96fcb30c9a43949d0daf6fad215a9e9dfb8d21d690e8efd"} Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.824681 4695 generic.go:334] "Generic (PLEG): container finished" podID="9c693edf-bd03-49b1-b26e-0bca3bae1cf0" containerID="e2388237a821d99f67bfbb33aaf6d6c456ee0ac5670d300c389d6105726591b5" exitCode=0 Nov 26 13:43:36 crc kubenswrapper[4695]: I1126 13:43:36.824745 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e1ab-account-create-update-tqhkx" event={"ID":"9c693edf-bd03-49b1-b26e-0bca3bae1cf0","Type":"ContainerDied","Data":"e2388237a821d99f67bfbb33aaf6d6c456ee0ac5670d300c389d6105726591b5"} Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.242533 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-jj9bb"] Nov 26 13:43:37 crc kubenswrapper[4695]: E1126 13:43:37.243333 
4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" containerName="ovn-config" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.243373 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" containerName="ovn-config" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.243615 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" containerName="ovn-config" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.244387 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.246864 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.247048 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.247200 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-55hqz" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.247336 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.251229 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jj9bb"] Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.360754 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-zvx8d" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.372374 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-combined-ca-bundle\") pod 
\"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.372417 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-config-data\") pod \"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.372437 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsdlc\" (UniqueName: \"kubernetes.io/projected/e91bc2f0-eaf1-4a68-9135-af44285dd833-kube-api-access-rsdlc\") pod \"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.442233 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-xktkp"] Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.443284 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.450888 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gwvqk" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.451001 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.454706 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xktkp"] Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.473985 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-config-data\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.474022 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn8jj\" (UniqueName: \"kubernetes.io/projected/12ef57bb-7a18-4350-a2b8-86efd6babbe0-kube-api-access-gn8jj\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.474091 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-combined-ca-bundle\") pod \"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.474108 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-db-sync-config-data\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.474138 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-combined-ca-bundle\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.474172 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-config-data\") pod \"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.474193 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsdlc\" (UniqueName: \"kubernetes.io/projected/e91bc2f0-eaf1-4a68-9135-af44285dd833-kube-api-access-rsdlc\") pod \"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.485198 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-combined-ca-bundle\") pod \"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.486759 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-config-data\") pod \"keystone-db-sync-jj9bb\" 
(UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.497430 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsdlc\" (UniqueName: \"kubernetes.io/projected/e91bc2f0-eaf1-4a68-9135-af44285dd833-kube-api-access-rsdlc\") pod \"keystone-db-sync-jj9bb\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.500250 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zvx8d-config-jz5sp"] Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.515202 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zvx8d-config-jz5sp"] Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.575166 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-zvx8d-config-65k99"] Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.575727 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-config-data\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.575788 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn8jj\" (UniqueName: \"kubernetes.io/projected/12ef57bb-7a18-4350-a2b8-86efd6babbe0-kube-api-access-gn8jj\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.575854 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-db-sync-config-data\") pod 
\"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.575877 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-combined-ca-bundle\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.576729 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.578658 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.582467 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.585880 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-combined-ca-bundle\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.586762 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-config-data\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.587138 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-db-sync-config-data\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.601335 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvx8d-config-65k99"] Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.618043 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn8jj\" (UniqueName: \"kubernetes.io/projected/12ef57bb-7a18-4350-a2b8-86efd6babbe0-kube-api-access-gn8jj\") pod \"glance-db-sync-xktkp\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.677349 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.677456 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmxj\" (UniqueName: \"kubernetes.io/projected/512a1307-48cd-4ac1-96df-8b2ce240eb91-kube-api-access-qpmxj\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.677493 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-scripts\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 
13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.677542 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-log-ovn\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.677570 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run-ovn\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.677588 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-additional-scripts\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.762326 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xktkp" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.779213 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpmxj\" (UniqueName: \"kubernetes.io/projected/512a1307-48cd-4ac1-96df-8b2ce240eb91-kube-api-access-qpmxj\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.779284 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-scripts\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.779311 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-log-ovn\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.779332 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run-ovn\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.779428 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-additional-scripts\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: 
\"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.779511 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.779826 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.780524 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run-ovn\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.780574 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-log-ovn\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.781287 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-additional-scripts\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " 
pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.782863 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-scripts\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.802892 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpmxj\" (UniqueName: \"kubernetes.io/projected/512a1307-48cd-4ac1-96df-8b2ce240eb91-kube-api-access-qpmxj\") pod \"ovn-controller-zvx8d-config-65k99\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:37 crc kubenswrapper[4695]: I1126 13:43:37.836383 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"c2f287a3d8f5ac812025837940b9fa0bd0adab02783cfc958b75dfb43f7c035d"} Nov 26 13:43:38 crc kubenswrapper[4695]: I1126 13:43:38.000548 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:38 crc kubenswrapper[4695]: I1126 13:43:38.078575 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-jj9bb"] Nov 26 13:43:38 crc kubenswrapper[4695]: I1126 13:43:38.846069 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jj9bb" event={"ID":"e91bc2f0-eaf1-4a68-9135-af44285dd833","Type":"ContainerStarted","Data":"8519d8909a4b63709e882c1c9f90b65925fa16d2bc017ec428e25c4770ad3bf7"} Nov 26 13:43:38 crc kubenswrapper[4695]: I1126 13:43:38.850911 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"c6be3d718a39ef70be4730bb7aad4e648e299dde24232db7012e724949beafba"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.177962 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ad11a4c-505b-4cba-b70c-5f745ffdfd0c" path="/var/lib/kubelet/pods/5ad11a4c-505b-4cba-b70c-5f745ffdfd0c/volumes" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.365256 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.372034 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.388514 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.413233 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.415713 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sf56\" (UniqueName: \"kubernetes.io/projected/d572e570-4517-4440-9226-7c432f0e318c-kube-api-access-4sf56\") pod \"d572e570-4517-4440-9226-7c432f0e318c\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.415853 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-operator-scripts\") pod \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.415888 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4c9h\" (UniqueName: \"kubernetes.io/projected/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-kube-api-access-w4c9h\") pod \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\" (UID: \"9c693edf-bd03-49b1-b26e-0bca3bae1cf0\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.415985 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d572e570-4517-4440-9226-7c432f0e318c-operator-scripts\") pod \"d572e570-4517-4440-9226-7c432f0e318c\" (UID: \"d572e570-4517-4440-9226-7c432f0e318c\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.417258 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9c693edf-bd03-49b1-b26e-0bca3bae1cf0" (UID: "9c693edf-bd03-49b1-b26e-0bca3bae1cf0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.417526 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d572e570-4517-4440-9226-7c432f0e318c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d572e570-4517-4440-9226-7c432f0e318c" (UID: "d572e570-4517-4440-9226-7c432f0e318c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.425752 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d572e570-4517-4440-9226-7c432f0e318c-kube-api-access-4sf56" (OuterVolumeSpecName: "kube-api-access-4sf56") pod "d572e570-4517-4440-9226-7c432f0e318c" (UID: "d572e570-4517-4440-9226-7c432f0e318c"). InnerVolumeSpecName "kube-api-access-4sf56". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.427533 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.432146 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-kube-api-access-w4c9h" (OuterVolumeSpecName: "kube-api-access-w4c9h") pod "9c693edf-bd03-49b1-b26e-0bca3bae1cf0" (UID: "9c693edf-bd03-49b1-b26e-0bca3bae1cf0"). InnerVolumeSpecName "kube-api-access-w4c9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.438395 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-c5874" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.466684 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-xktkp"] Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519617 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rn25\" (UniqueName: \"kubernetes.io/projected/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-kube-api-access-5rn25\") pod \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519666 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4820ce3c-a324-4d56-a27a-9dd2695e286f-operator-scripts\") pod \"4820ce3c-a324-4d56-a27a-9dd2695e286f\" (UID: \"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519746 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvft7\" (UniqueName: \"kubernetes.io/projected/4c86a814-b63d-48d8-98de-6f560864c876-kube-api-access-fvft7\") pod \"4c86a814-b63d-48d8-98de-6f560864c876\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519763 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-operator-scripts\") pod \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\" (UID: \"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519786 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rksmz\" (UniqueName: \"kubernetes.io/projected/4820ce3c-a324-4d56-a27a-9dd2695e286f-kube-api-access-rksmz\") pod \"4820ce3c-a324-4d56-a27a-9dd2695e286f\" (UID: 
\"4820ce3c-a324-4d56-a27a-9dd2695e286f\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519835 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-operator-scripts\") pod \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519888 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c86a814-b63d-48d8-98de-6f560864c876-operator-scripts\") pod \"4c86a814-b63d-48d8-98de-6f560864c876\" (UID: \"4c86a814-b63d-48d8-98de-6f560864c876\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.519978 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5h4j\" (UniqueName: \"kubernetes.io/projected/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-kube-api-access-g5h4j\") pod \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\" (UID: \"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d\") " Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.521981 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d" (UID: "a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.523027 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sf56\" (UniqueName: \"kubernetes.io/projected/d572e570-4517-4440-9226-7c432f0e318c-kube-api-access-4sf56\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.523080 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.523094 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4c9h\" (UniqueName: \"kubernetes.io/projected/9c693edf-bd03-49b1-b26e-0bca3bae1cf0-kube-api-access-w4c9h\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.523105 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d572e570-4517-4440-9226-7c432f0e318c-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.523115 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.524773 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4820ce3c-a324-4d56-a27a-9dd2695e286f-kube-api-access-rksmz" (OuterVolumeSpecName: "kube-api-access-rksmz") pod "4820ce3c-a324-4d56-a27a-9dd2695e286f" (UID: "4820ce3c-a324-4d56-a27a-9dd2695e286f"). InnerVolumeSpecName "kube-api-access-rksmz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.525533 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-kube-api-access-5rn25" (OuterVolumeSpecName: "kube-api-access-5rn25") pod "b26d39e6-265d-44a3-a6f3-ca353fa4d0a9" (UID: "b26d39e6-265d-44a3-a6f3-ca353fa4d0a9"). InnerVolumeSpecName "kube-api-access-5rn25". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.525659 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c86a814-b63d-48d8-98de-6f560864c876-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c86a814-b63d-48d8-98de-6f560864c876" (UID: "4c86a814-b63d-48d8-98de-6f560864c876"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.525748 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-kube-api-access-g5h4j" (OuterVolumeSpecName: "kube-api-access-g5h4j") pod "a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d" (UID: "a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d"). InnerVolumeSpecName "kube-api-access-g5h4j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.526136 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4820ce3c-a324-4d56-a27a-9dd2695e286f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4820ce3c-a324-4d56-a27a-9dd2695e286f" (UID: "4820ce3c-a324-4d56-a27a-9dd2695e286f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.533791 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b26d39e6-265d-44a3-a6f3-ca353fa4d0a9" (UID: "b26d39e6-265d-44a3-a6f3-ca353fa4d0a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.536506 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c86a814-b63d-48d8-98de-6f560864c876-kube-api-access-fvft7" (OuterVolumeSpecName: "kube-api-access-fvft7") pod "4c86a814-b63d-48d8-98de-6f560864c876" (UID: "4c86a814-b63d-48d8-98de-6f560864c876"). InnerVolumeSpecName "kube-api-access-fvft7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:39 crc kubenswrapper[4695]: W1126 13:43:39.538107 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod512a1307_48cd_4ac1_96df_8b2ce240eb91.slice/crio-4f89acafb2a36c10ee702135b5c1c3acdde831b8f8afc0a326cc7ae74c3796a4 WatchSource:0}: Error finding container 4f89acafb2a36c10ee702135b5c1c3acdde831b8f8afc0a326cc7ae74c3796a4: Status 404 returned error can't find the container with id 4f89acafb2a36c10ee702135b5c1c3acdde831b8f8afc0a326cc7ae74c3796a4 Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.547375 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-zvx8d-config-65k99"] Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.625156 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5h4j\" (UniqueName: \"kubernetes.io/projected/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d-kube-api-access-g5h4j\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: 
I1126 13:43:39.625190 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rn25\" (UniqueName: \"kubernetes.io/projected/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-kube-api-access-5rn25\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.625204 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4820ce3c-a324-4d56-a27a-9dd2695e286f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.625216 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvft7\" (UniqueName: \"kubernetes.io/projected/4c86a814-b63d-48d8-98de-6f560864c876-kube-api-access-fvft7\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.625228 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.625241 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rksmz\" (UniqueName: \"kubernetes.io/projected/4820ce3c-a324-4d56-a27a-9dd2695e286f-kube-api-access-rksmz\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.625287 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c86a814-b63d-48d8-98de-6f560864c876-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.862410 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xktkp" event={"ID":"12ef57bb-7a18-4350-a2b8-86efd6babbe0","Type":"ContainerStarted","Data":"63a605cec1d87ce2a174da156af8e9128f8af6fa7fc2c4f883a46982b94469dd"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.864483 4695 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8be9-account-create-update-shnqq" event={"ID":"4820ce3c-a324-4d56-a27a-9dd2695e286f","Type":"ContainerDied","Data":"0d88de6249a4d7f97603706af4c6b57ebf090de4b615bc1aff505126fb8b47cd"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.864515 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d88de6249a4d7f97603706af4c6b57ebf090de4b615bc1aff505126fb8b47cd" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.864565 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8be9-account-create-update-shnqq" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.870676 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-k62d7" event={"ID":"d572e570-4517-4440-9226-7c432f0e318c","Type":"ContainerDied","Data":"8347442813de53f49dd972ae95d5d11e77eb0227cb391dce893b658cbf9c75fb"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.870713 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8347442813de53f49dd972ae95d5d11e77eb0227cb391dce893b658cbf9c75fb" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.870743 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-k62d7" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.872211 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-9d5a-account-create-update-h8n7j" event={"ID":"a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d","Type":"ContainerDied","Data":"7d60a89805d592f8e5bcd483abda88e6460bf927d2128162c67e8338169b6d7b"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.872238 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d60a89805d592f8e5bcd483abda88e6460bf927d2128162c67e8338169b6d7b" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.872307 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-9d5a-account-create-update-h8n7j" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.877687 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"2af01c298a66dedf75aee2a615d00fc62468fd671382b29886689ae44e63200e"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.877727 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"4edd603de74e1b90f2642a3a5ebb2be289adbccb565810193a259757db3c0f03"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.877738 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"687c299e570f2b636c8c7df4f8a9615cebf36c1c3541a9ea1ba50136234b2235"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.880410 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-n95vb" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.880521 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-n95vb" event={"ID":"b26d39e6-265d-44a3-a6f3-ca353fa4d0a9","Type":"ContainerDied","Data":"d7b2197b342eeba906f9be9e3efb0424cd313ce75e9241831aafb8e8f4d47f90"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.880555 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7b2197b342eeba906f9be9e3efb0424cd313ce75e9241831aafb8e8f4d47f90" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.883001 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-c5874" event={"ID":"4c86a814-b63d-48d8-98de-6f560864c876","Type":"ContainerDied","Data":"5189452eeab11f8b4ea70c056827afa5f6354b5c214621175d72a50adc81edf6"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.883024 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5189452eeab11f8b4ea70c056827afa5f6354b5c214621175d72a50adc81edf6" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.883026 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-c5874" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.885569 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e1ab-account-create-update-tqhkx" event={"ID":"9c693edf-bd03-49b1-b26e-0bca3bae1cf0","Type":"ContainerDied","Data":"28577305bb893a1526ed3e9c26f70bacfa9fa1423472d963b582a721d61edf9a"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.885589 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28577305bb893a1526ed3e9c26f70bacfa9fa1423472d963b582a721d61edf9a" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.885654 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-e1ab-account-create-update-tqhkx" Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.893415 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-65k99" event={"ID":"512a1307-48cd-4ac1-96df-8b2ce240eb91","Type":"ContainerStarted","Data":"6f2aa02fe29dad19cf15659a8abdfbde73ae603d00446fc255df2414c759783e"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.893460 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-65k99" event={"ID":"512a1307-48cd-4ac1-96df-8b2ce240eb91","Type":"ContainerStarted","Data":"4f89acafb2a36c10ee702135b5c1c3acdde831b8f8afc0a326cc7ae74c3796a4"} Nov 26 13:43:39 crc kubenswrapper[4695]: I1126 13:43:39.910019 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-zvx8d-config-65k99" podStartSLOduration=2.910001443 podStartE2EDuration="2.910001443s" podCreationTimestamp="2025-11-26 13:43:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:43:39.9080361 +0000 UTC m=+1203.543861182" watchObservedRunningTime="2025-11-26 13:43:39.910001443 +0000 UTC m=+1203.545826525" Nov 26 13:43:40 crc kubenswrapper[4695]: I1126 13:43:40.905837 4695 generic.go:334] "Generic (PLEG): container finished" podID="512a1307-48cd-4ac1-96df-8b2ce240eb91" containerID="6f2aa02fe29dad19cf15659a8abdfbde73ae603d00446fc255df2414c759783e" exitCode=0 Nov 26 13:43:40 crc kubenswrapper[4695]: I1126 13:43:40.905877 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-65k99" event={"ID":"512a1307-48cd-4ac1-96df-8b2ce240eb91","Type":"ContainerDied","Data":"6f2aa02fe29dad19cf15659a8abdfbde73ae603d00446fc255df2414c759783e"} Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.787976 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.830468 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run\") pod \"512a1307-48cd-4ac1-96df-8b2ce240eb91\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.833728 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-additional-scripts\") pod \"512a1307-48cd-4ac1-96df-8b2ce240eb91\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.833875 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-scripts\") pod \"512a1307-48cd-4ac1-96df-8b2ce240eb91\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.833967 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpmxj\" (UniqueName: \"kubernetes.io/projected/512a1307-48cd-4ac1-96df-8b2ce240eb91-kube-api-access-qpmxj\") pod \"512a1307-48cd-4ac1-96df-8b2ce240eb91\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.834169 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-log-ovn\") pod \"512a1307-48cd-4ac1-96df-8b2ce240eb91\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.830577 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run" (OuterVolumeSpecName: "var-run") pod "512a1307-48cd-4ac1-96df-8b2ce240eb91" (UID: "512a1307-48cd-4ac1-96df-8b2ce240eb91"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.834376 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "512a1307-48cd-4ac1-96df-8b2ce240eb91" (UID: "512a1307-48cd-4ac1-96df-8b2ce240eb91"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.834484 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run-ovn\") pod \"512a1307-48cd-4ac1-96df-8b2ce240eb91\" (UID: \"512a1307-48cd-4ac1-96df-8b2ce240eb91\") " Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.835238 4695 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.835260 4695 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.835301 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "512a1307-48cd-4ac1-96df-8b2ce240eb91" (UID: "512a1307-48cd-4ac1-96df-8b2ce240eb91"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.835549 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "512a1307-48cd-4ac1-96df-8b2ce240eb91" (UID: "512a1307-48cd-4ac1-96df-8b2ce240eb91"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.836709 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-scripts" (OuterVolumeSpecName: "scripts") pod "512a1307-48cd-4ac1-96df-8b2ce240eb91" (UID: "512a1307-48cd-4ac1-96df-8b2ce240eb91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.844722 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/512a1307-48cd-4ac1-96df-8b2ce240eb91-kube-api-access-qpmxj" (OuterVolumeSpecName: "kube-api-access-qpmxj") pod "512a1307-48cd-4ac1-96df-8b2ce240eb91" (UID: "512a1307-48cd-4ac1-96df-8b2ce240eb91"). InnerVolumeSpecName "kube-api-access-qpmxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.937857 4695 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/512a1307-48cd-4ac1-96df-8b2ce240eb91-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.937896 4695 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.937906 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/512a1307-48cd-4ac1-96df-8b2ce240eb91-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.937914 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpmxj\" (UniqueName: \"kubernetes.io/projected/512a1307-48cd-4ac1-96df-8b2ce240eb91-kube-api-access-qpmxj\") on node \"crc\" DevicePath \"\"" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.945703 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-zvx8d-config-65k99" event={"ID":"512a1307-48cd-4ac1-96df-8b2ce240eb91","Type":"ContainerDied","Data":"4f89acafb2a36c10ee702135b5c1c3acdde831b8f8afc0a326cc7ae74c3796a4"} Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.945756 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f89acafb2a36c10ee702135b5c1c3acdde831b8f8afc0a326cc7ae74c3796a4" Nov 26 13:43:44 crc kubenswrapper[4695]: I1126 13:43:44.945827 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-zvx8d-config-65k99" Nov 26 13:43:45 crc kubenswrapper[4695]: I1126 13:43:45.882413 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-zvx8d-config-65k99"] Nov 26 13:43:45 crc kubenswrapper[4695]: I1126 13:43:45.889272 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-zvx8d-config-65k99"] Nov 26 13:43:45 crc kubenswrapper[4695]: I1126 13:43:45.967329 4695 scope.go:117] "RemoveContainer" containerID="6baeb7439c5485dcfc060eb87e94e85c12b6344dfd41fa46e2a081c414417795" Nov 26 13:43:47 crc kubenswrapper[4695]: I1126 13:43:47.179273 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="512a1307-48cd-4ac1-96df-8b2ce240eb91" path="/var/lib/kubelet/pods/512a1307-48cd-4ac1-96df-8b2ce240eb91/volumes" Nov 26 13:43:53 crc kubenswrapper[4695]: I1126 13:43:53.524610 4695 scope.go:117] "RemoveContainer" containerID="3e8c0c5188069f5512aff92cb8c0decab0659788e0b96efece1d1a62a77fbc1f" Nov 26 13:43:53 crc kubenswrapper[4695]: E1126 13:43:53.622194 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Nov 26 13:43:53 crc kubenswrapper[4695]: E1126 13:43:53.622483 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gn8jj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-xktkp_openstack(12ef57bb-7a18-4350-a2b8-86efd6babbe0): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Nov 26 13:43:53 crc kubenswrapper[4695]: E1126 13:43:53.623680 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-xktkp" podUID="12ef57bb-7a18-4350-a2b8-86efd6babbe0" Nov 26 13:43:54 crc kubenswrapper[4695]: E1126 13:43:54.030405 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-xktkp" podUID="12ef57bb-7a18-4350-a2b8-86efd6babbe0" Nov 26 13:43:55 crc kubenswrapper[4695]: I1126 13:43:55.034049 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jj9bb" event={"ID":"e91bc2f0-eaf1-4a68-9135-af44285dd833","Type":"ContainerStarted","Data":"cadcc79f818d6323d2963027de9a3ea4e927c33c915b4c4c4ceede6dc3132e74"} Nov 26 13:43:55 crc kubenswrapper[4695]: I1126 13:43:55.038037 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"6017aec1e1ebc9cf0b732f969a9770b0718d300223f7a4b638aafef595f9ce69"} Nov 26 13:43:55 crc kubenswrapper[4695]: I1126 13:43:55.038073 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"abc6255842a350a86d9bd847f6f0599393be738c8a6cf3dcc970d4b88da7ca37"} Nov 26 13:43:55 crc kubenswrapper[4695]: I1126 13:43:55.054271 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-jj9bb" podStartSLOduration=2.620351739 podStartE2EDuration="18.054256s" podCreationTimestamp="2025-11-26 13:43:37 +0000 
UTC" firstStartedPulling="2025-11-26 13:43:38.095115866 +0000 UTC m=+1201.730940948" lastFinishedPulling="2025-11-26 13:43:53.529020117 +0000 UTC m=+1217.164845209" observedRunningTime="2025-11-26 13:43:55.046732451 +0000 UTC m=+1218.682557533" watchObservedRunningTime="2025-11-26 13:43:55.054256 +0000 UTC m=+1218.690081082" Nov 26 13:43:56 crc kubenswrapper[4695]: I1126 13:43:56.051027 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"d91494b7fb6fa7d074852a131ca4845e766800325f50906b9fa04c158d358dda"} Nov 26 13:43:56 crc kubenswrapper[4695]: I1126 13:43:56.051410 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"6fa838a96cf63baf5d8ad7fd9093e7c69150ab087b49bc4add14e28d751ce8b8"} Nov 26 13:43:56 crc kubenswrapper[4695]: I1126 13:43:56.051429 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"e576afa232e541cee81edc5bdceedddd440c8505c5994627a3f7705c7a2bedf9"} Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.061646 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"34a456802456f7d61cdbab089a0161c214896db3903c1c5e1976f7690559757f"} Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.061894 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b77b4e90-5d1a-4724-a57f-2ff4a394d434","Type":"ContainerStarted","Data":"d28e12e57c0353e3ded8addfa0a1f398ea98096878641f2af820e0f1616a24b1"} Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.096423 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-storage-0" podStartSLOduration=37.993506598 podStartE2EDuration="59.096405275s" podCreationTimestamp="2025-11-26 13:42:58 +0000 UTC" firstStartedPulling="2025-11-26 13:43:32.422412722 +0000 UTC m=+1196.058237804" lastFinishedPulling="2025-11-26 13:43:53.525311389 +0000 UTC m=+1217.161136481" observedRunningTime="2025-11-26 13:43:57.089096802 +0000 UTC m=+1220.724921904" watchObservedRunningTime="2025-11-26 13:43:57.096405275 +0000 UTC m=+1220.732230357" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340259 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-cbgr4"] Nov 26 13:43:57 crc kubenswrapper[4695]: E1126 13:43:57.340672 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c693edf-bd03-49b1-b26e-0bca3bae1cf0" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340684 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c693edf-bd03-49b1-b26e-0bca3bae1cf0" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: E1126 13:43:57.340696 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d572e570-4517-4440-9226-7c432f0e318c" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340703 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d572e570-4517-4440-9226-7c432f0e318c" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: E1126 13:43:57.340722 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="512a1307-48cd-4ac1-96df-8b2ce240eb91" containerName="ovn-config" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340728 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="512a1307-48cd-4ac1-96df-8b2ce240eb91" containerName="ovn-config" Nov 26 13:43:57 crc kubenswrapper[4695]: E1126 13:43:57.340740 4695 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340746 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: E1126 13:43:57.340755 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26d39e6-265d-44a3-a6f3-ca353fa4d0a9" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340760 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26d39e6-265d-44a3-a6f3-ca353fa4d0a9" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: E1126 13:43:57.340767 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4820ce3c-a324-4d56-a27a-9dd2695e286f" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340774 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4820ce3c-a324-4d56-a27a-9dd2695e286f" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: E1126 13:43:57.340788 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c86a814-b63d-48d8-98de-6f560864c876" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340794 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c86a814-b63d-48d8-98de-6f560864c876" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340937 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340955 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d572e570-4517-4440-9226-7c432f0e318c" containerName="mariadb-database-create" Nov 26 13:43:57 crc 
kubenswrapper[4695]: I1126 13:43:57.340972 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="4820ce3c-a324-4d56-a27a-9dd2695e286f" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340981 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="512a1307-48cd-4ac1-96df-8b2ce240eb91" containerName="ovn-config" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.340991 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c86a814-b63d-48d8-98de-6f560864c876" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.341000 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c693edf-bd03-49b1-b26e-0bca3bae1cf0" containerName="mariadb-account-create-update" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.341012 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b26d39e6-265d-44a3-a6f3-ca353fa4d0a9" containerName="mariadb-database-create" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.341797 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.346686 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.355911 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-config\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.356000 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.356042 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.356103 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.356151 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.356174 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzclk\" (UniqueName: \"kubernetes.io/projected/44541725-82b5-41bc-b51b-a3e624eb84e6-kube-api-access-xzclk\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.382318 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-cbgr4"] Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.457596 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.457656 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.457679 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzclk\" (UniqueName: \"kubernetes.io/projected/44541725-82b5-41bc-b51b-a3e624eb84e6-kube-api-access-xzclk\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: 
\"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.457726 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-config\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.457764 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.457786 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.458492 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.458542 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.458654 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.459234 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.459310 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-config\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.481573 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzclk\" (UniqueName: \"kubernetes.io/projected/44541725-82b5-41bc-b51b-a3e624eb84e6-kube-api-access-xzclk\") pod \"dnsmasq-dns-6d5b6d6b67-cbgr4\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:57 crc kubenswrapper[4695]: I1126 13:43:57.661399 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:43:58 crc kubenswrapper[4695]: I1126 13:43:58.098424 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-cbgr4"] Nov 26 13:43:59 crc kubenswrapper[4695]: I1126 13:43:59.078378 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" event={"ID":"44541725-82b5-41bc-b51b-a3e624eb84e6","Type":"ContainerStarted","Data":"82fc2f69b0f8016224509d5c9d82124946758e724e94d4ea827d344d29b80400"} Nov 26 13:43:59 crc kubenswrapper[4695]: I1126 13:43:59.079940 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" event={"ID":"44541725-82b5-41bc-b51b-a3e624eb84e6","Type":"ContainerStarted","Data":"1bec86f26310ed0f055aa92bbef2344f16f6c1aa8d93f8725975c9ba0ac24b47"} Nov 26 13:44:00 crc kubenswrapper[4695]: I1126 13:44:00.089584 4695 generic.go:334] "Generic (PLEG): container finished" podID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerID="82fc2f69b0f8016224509d5c9d82124946758e724e94d4ea827d344d29b80400" exitCode=0 Nov 26 13:44:00 crc kubenswrapper[4695]: I1126 13:44:00.089864 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" event={"ID":"44541725-82b5-41bc-b51b-a3e624eb84e6","Type":"ContainerDied","Data":"82fc2f69b0f8016224509d5c9d82124946758e724e94d4ea827d344d29b80400"} Nov 26 13:44:00 crc kubenswrapper[4695]: I1126 13:44:00.089985 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" event={"ID":"44541725-82b5-41bc-b51b-a3e624eb84e6","Type":"ContainerStarted","Data":"dfab0d4255935b5fe7d3beaac29565c701f8156baca086cb489167173f5f52df"} Nov 26 13:44:00 crc kubenswrapper[4695]: I1126 13:44:00.090007 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:44:00 crc kubenswrapper[4695]: I1126 13:44:00.119258 4695 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" podStartSLOduration=3.119239931 podStartE2EDuration="3.119239931s" podCreationTimestamp="2025-11-26 13:43:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:44:00.11639573 +0000 UTC m=+1223.752220842" watchObservedRunningTime="2025-11-26 13:44:00.119239931 +0000 UTC m=+1223.755065013" Nov 26 13:44:08 crc kubenswrapper[4695]: I1126 13:44:07.992780 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:44:08 crc kubenswrapper[4695]: I1126 13:44:08.067125 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-nqfxz"] Nov 26 13:44:08 crc kubenswrapper[4695]: I1126 13:44:08.067414 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerName="dnsmasq-dns" containerID="cri-o://1ec3a3ed277498c6dad65878a193b671b5cbb0133ab9f6a452bf705ce487f768" gracePeriod=10 Nov 26 13:44:09 crc kubenswrapper[4695]: I1126 13:44:09.176147 4695 generic.go:334] "Generic (PLEG): container finished" podID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerID="1ec3a3ed277498c6dad65878a193b671b5cbb0133ab9f6a452bf705ce487f768" exitCode=0 Nov 26 13:44:09 crc kubenswrapper[4695]: I1126 13:44:09.178473 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" event={"ID":"6fb1a7cc-9253-4702-b8ff-9b2daa077c96","Type":"ContainerDied","Data":"1ec3a3ed277498c6dad65878a193b671b5cbb0133ab9f6a452bf705ce487f768"} Nov 26 13:44:12 crc kubenswrapper[4695]: I1126 13:44:12.079425 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" 
containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.112:5353: connect: connection refused" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.594958 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.736699 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-sb\") pod \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.736867 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-dns-svc\") pod \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.736967 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kjn4\" (UniqueName: \"kubernetes.io/projected/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-kube-api-access-4kjn4\") pod \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.737577 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-nb\") pod \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.737643 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-config\") pod 
\"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\" (UID: \"6fb1a7cc-9253-4702-b8ff-9b2daa077c96\") " Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.741533 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-kube-api-access-4kjn4" (OuterVolumeSpecName: "kube-api-access-4kjn4") pod "6fb1a7cc-9253-4702-b8ff-9b2daa077c96" (UID: "6fb1a7cc-9253-4702-b8ff-9b2daa077c96"). InnerVolumeSpecName "kube-api-access-4kjn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.776633 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fb1a7cc-9253-4702-b8ff-9b2daa077c96" (UID: "6fb1a7cc-9253-4702-b8ff-9b2daa077c96"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.776723 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fb1a7cc-9253-4702-b8ff-9b2daa077c96" (UID: "6fb1a7cc-9253-4702-b8ff-9b2daa077c96"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.778830 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-config" (OuterVolumeSpecName: "config") pod "6fb1a7cc-9253-4702-b8ff-9b2daa077c96" (UID: "6fb1a7cc-9253-4702-b8ff-9b2daa077c96"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.783438 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fb1a7cc-9253-4702-b8ff-9b2daa077c96" (UID: "6fb1a7cc-9253-4702-b8ff-9b2daa077c96"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.839848 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kjn4\" (UniqueName: \"kubernetes.io/projected/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-kube-api-access-4kjn4\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.839887 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.839898 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.839906 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:13 crc kubenswrapper[4695]: I1126 13:44:13.839914 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fb1a7cc-9253-4702-b8ff-9b2daa077c96-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:14 crc kubenswrapper[4695]: I1126 13:44:14.252868 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" 
event={"ID":"6fb1a7cc-9253-4702-b8ff-9b2daa077c96","Type":"ContainerDied","Data":"b039d5fd7b0a39ab2c22e8bfc77786f8264f012154a51b0e49e63751d38bcd35"} Nov 26 13:44:14 crc kubenswrapper[4695]: I1126 13:44:14.253841 4695 scope.go:117] "RemoveContainer" containerID="1ec3a3ed277498c6dad65878a193b671b5cbb0133ab9f6a452bf705ce487f768" Nov 26 13:44:14 crc kubenswrapper[4695]: I1126 13:44:14.252941 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-nqfxz" Nov 26 13:44:14 crc kubenswrapper[4695]: I1126 13:44:14.255991 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xktkp" event={"ID":"12ef57bb-7a18-4350-a2b8-86efd6babbe0","Type":"ContainerStarted","Data":"c5836f93af7c729db751b5363c438a5cca7867eb9aa3c91fa8795af36f4a29be"} Nov 26 13:44:14 crc kubenswrapper[4695]: I1126 13:44:14.290678 4695 scope.go:117] "RemoveContainer" containerID="ba9f980452abca66cb826d0c4f166d7eacd3f607723da26468f27cf111921afe" Nov 26 13:44:14 crc kubenswrapper[4695]: I1126 13:44:14.295456 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-nqfxz"] Nov 26 13:44:14 crc kubenswrapper[4695]: I1126 13:44:14.301628 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-nqfxz"] Nov 26 13:44:15 crc kubenswrapper[4695]: I1126 13:44:15.171889 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" path="/var/lib/kubelet/pods/6fb1a7cc-9253-4702-b8ff-9b2daa077c96/volumes" Nov 26 13:44:15 crc kubenswrapper[4695]: I1126 13:44:15.312188 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-xktkp" podStartSLOduration=4.572981426 podStartE2EDuration="38.312162275s" podCreationTimestamp="2025-11-26 13:43:37 +0000 UTC" firstStartedPulling="2025-11-26 13:43:39.48127315 +0000 UTC m=+1203.117098232" lastFinishedPulling="2025-11-26 
13:44:13.220453949 +0000 UTC m=+1236.856279081" observedRunningTime="2025-11-26 13:44:15.302551876 +0000 UTC m=+1238.938376978" watchObservedRunningTime="2025-11-26 13:44:15.312162275 +0000 UTC m=+1238.947987367" Nov 26 13:44:29 crc kubenswrapper[4695]: I1126 13:44:29.400320 4695 generic.go:334] "Generic (PLEG): container finished" podID="e91bc2f0-eaf1-4a68-9135-af44285dd833" containerID="cadcc79f818d6323d2963027de9a3ea4e927c33c915b4c4c4ceede6dc3132e74" exitCode=0 Nov 26 13:44:29 crc kubenswrapper[4695]: I1126 13:44:29.400417 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jj9bb" event={"ID":"e91bc2f0-eaf1-4a68-9135-af44285dd833","Type":"ContainerDied","Data":"cadcc79f818d6323d2963027de9a3ea4e927c33c915b4c4c4ceede6dc3132e74"} Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.690658 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.826462 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsdlc\" (UniqueName: \"kubernetes.io/projected/e91bc2f0-eaf1-4a68-9135-af44285dd833-kube-api-access-rsdlc\") pod \"e91bc2f0-eaf1-4a68-9135-af44285dd833\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.826662 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-combined-ca-bundle\") pod \"e91bc2f0-eaf1-4a68-9135-af44285dd833\" (UID: \"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.826815 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-config-data\") pod \"e91bc2f0-eaf1-4a68-9135-af44285dd833\" (UID: 
\"e91bc2f0-eaf1-4a68-9135-af44285dd833\") " Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.834495 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91bc2f0-eaf1-4a68-9135-af44285dd833-kube-api-access-rsdlc" (OuterVolumeSpecName: "kube-api-access-rsdlc") pod "e91bc2f0-eaf1-4a68-9135-af44285dd833" (UID: "e91bc2f0-eaf1-4a68-9135-af44285dd833"). InnerVolumeSpecName "kube-api-access-rsdlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.864528 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e91bc2f0-eaf1-4a68-9135-af44285dd833" (UID: "e91bc2f0-eaf1-4a68-9135-af44285dd833"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.889965 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-config-data" (OuterVolumeSpecName: "config-data") pod "e91bc2f0-eaf1-4a68-9135-af44285dd833" (UID: "e91bc2f0-eaf1-4a68-9135-af44285dd833"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.929168 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.929223 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsdlc\" (UniqueName: \"kubernetes.io/projected/e91bc2f0-eaf1-4a68-9135-af44285dd833-kube-api-access-rsdlc\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:30 crc kubenswrapper[4695]: I1126 13:44:30.929238 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e91bc2f0-eaf1-4a68-9135-af44285dd833-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.417739 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-jj9bb" event={"ID":"e91bc2f0-eaf1-4a68-9135-af44285dd833","Type":"ContainerDied","Data":"8519d8909a4b63709e882c1c9f90b65925fa16d2bc017ec428e25c4770ad3bf7"} Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.417790 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8519d8909a4b63709e882c1c9f90b65925fa16d2bc017ec428e25c4770ad3bf7" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.417848 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-jj9bb" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.728448 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8ppvg"] Nov 26 13:44:31 crc kubenswrapper[4695]: E1126 13:44:31.729038 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerName="init" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.729049 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerName="init" Nov 26 13:44:31 crc kubenswrapper[4695]: E1126 13:44:31.729070 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91bc2f0-eaf1-4a68-9135-af44285dd833" containerName="keystone-db-sync" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.729076 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91bc2f0-eaf1-4a68-9135-af44285dd833" containerName="keystone-db-sync" Nov 26 13:44:31 crc kubenswrapper[4695]: E1126 13:44:31.729095 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerName="dnsmasq-dns" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.729101 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerName="dnsmasq-dns" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.729258 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91bc2f0-eaf1-4a68-9135-af44285dd833" containerName="keystone-db-sync" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.729272 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb1a7cc-9253-4702-b8ff-9b2daa077c96" containerName="dnsmasq-dns" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.730128 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.739162 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fb6l4"] Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.740256 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.742472 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.742492 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.742707 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.747650 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-55hqz" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.747911 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.750572 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8ppvg"] Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.770931 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fb6l4"] Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851638 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-config-data\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851691 
4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-fernet-keys\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851717 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-combined-ca-bundle\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851743 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851758 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-scripts\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851788 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851807 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851847 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7t4w\" (UniqueName: \"kubernetes.io/projected/839691bb-afa9-4142-ae17-518432ca3059-kube-api-access-h7t4w\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851877 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-credential-keys\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851893 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851926 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-449nm\" (UniqueName: \"kubernetes.io/projected/1f95654b-fe99-4a25-a066-97e65d8566c8-kube-api-access-449nm\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 
13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.851945 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-config\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.952922 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-zzx52"] Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954682 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-config-data\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954733 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-fernet-keys\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954757 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-combined-ca-bundle\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954782 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-scripts\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " 
pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954798 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954819 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954877 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954907 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7t4w\" (UniqueName: \"kubernetes.io/projected/839691bb-afa9-4142-ae17-518432ca3059-kube-api-access-h7t4w\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954935 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-credential-keys\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 
13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954951 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.954984 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-449nm\" (UniqueName: \"kubernetes.io/projected/1f95654b-fe99-4a25-a066-97e65d8566c8-kube-api-access-449nm\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.955005 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-config\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.961805 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.962773 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.966226 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.967830 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.969001 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-config\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.971801 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-combined-ca-bundle\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.978926 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-credential-keys\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.979406 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-fernet-keys\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.987239 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-scripts\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.989636 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.995275 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-config-data\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.995335 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-568d6b79b5-59hdk"] Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.997254 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-t6qg9" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.997398 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 26 13:44:31 crc kubenswrapper[4695]: I1126 13:44:31.997505 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.000216 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-449nm\" (UniqueName: 
\"kubernetes.io/projected/1f95654b-fe99-4a25-a066-97e65d8566c8-kube-api-access-449nm\") pod \"keystone-bootstrap-fb6l4\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.005377 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7t4w\" (UniqueName: \"kubernetes.io/projected/839691bb-afa9-4142-ae17-518432ca3059-kube-api-access-h7t4w\") pod \"dnsmasq-dns-6f8c45789f-8ppvg\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.008856 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.015551 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.031579 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.031634 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-w8g9k" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.031676 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.064467 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zzx52"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.065079 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.065802 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066111 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905c51dc-7482-4b8c-acb2-49f72df79646-logs\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066156 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6597360-8ab5-4bba-9137-fb4f57019c78-etc-machine-id\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066180 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hcjz\" (UniqueName: \"kubernetes.io/projected/b6597360-8ab5-4bba-9137-fb4f57019c78-kube-api-access-2hcjz\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066206 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qn58t\" (UniqueName: \"kubernetes.io/projected/905c51dc-7482-4b8c-acb2-49f72df79646-kube-api-access-qn58t\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066222 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-config-data\") pod \"horizon-568d6b79b5-59hdk\" (UID: 
\"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066242 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-scripts\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066256 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-combined-ca-bundle\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066283 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-config-data\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066298 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/905c51dc-7482-4b8c-acb2-49f72df79646-horizon-secret-key\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066316 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-db-sync-config-data\") pod \"cinder-db-sync-zzx52\" (UID: 
\"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.066369 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-scripts\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.089403 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-568d6b79b5-59hdk"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.157742 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-s5q69"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.158897 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168157 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-scripts\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168461 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905c51dc-7482-4b8c-acb2-49f72df79646-logs\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168557 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6597360-8ab5-4bba-9137-fb4f57019c78-etc-machine-id\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") 
" pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168632 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hcjz\" (UniqueName: \"kubernetes.io/projected/b6597360-8ab5-4bba-9137-fb4f57019c78-kube-api-access-2hcjz\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168715 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qn58t\" (UniqueName: \"kubernetes.io/projected/905c51dc-7482-4b8c-acb2-49f72df79646-kube-api-access-qn58t\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168657 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6597360-8ab5-4bba-9137-fb4f57019c78-etc-machine-id\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168780 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-config-data\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168924 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-scripts\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.168962 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-combined-ca-bundle\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.169031 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-config-data\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.169066 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/905c51dc-7482-4b8c-acb2-49f72df79646-horizon-secret-key\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.169095 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-db-sync-config-data\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.169551 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b6jfc" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.169787 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-scripts\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc 
kubenswrapper[4695]: I1126 13:44:32.170684 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-config-data\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.170748 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.194694 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-scripts\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.200054 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-combined-ca-bundle\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.200379 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-86bf56c6d9-npgcp"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.200787 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-db-sync-config-data\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.201914 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.202078 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905c51dc-7482-4b8c-acb2-49f72df79646-logs\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.217915 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/905c51dc-7482-4b8c-acb2-49f72df79646-horizon-secret-key\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.218714 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-config-data\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.221275 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qn58t\" (UniqueName: \"kubernetes.io/projected/905c51dc-7482-4b8c-acb2-49f72df79646-kube-api-access-qn58t\") pod \"horizon-568d6b79b5-59hdk\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.225255 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hcjz\" (UniqueName: \"kubernetes.io/projected/b6597360-8ab5-4bba-9137-fb4f57019c78-kube-api-access-2hcjz\") pod \"cinder-db-sync-zzx52\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.232432 4695 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s5q69"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.265238 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.272953 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/851d72f2-c284-488f-a1f1-01a1728d5a18-horizon-secret-key\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.272987 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-scripts\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.273036 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851d72f2-c284-488f-a1f1-01a1728d5a18-logs\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.273072 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-db-sync-config-data\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.273106 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-config-data\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.273130 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7zq9\" (UniqueName: \"kubernetes.io/projected/851d72f2-c284-488f-a1f1-01a1728d5a18-kube-api-access-r7zq9\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.273156 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzvqm\" (UniqueName: \"kubernetes.io/projected/322aceb8-cfb2-478e-a586-68c3f43b3977-kube-api-access-mzvqm\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.273198 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-combined-ca-bundle\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.287623 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cc28g"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.288799 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.292013 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.292182 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pw8jw" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.293544 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.315314 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8ppvg"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.335647 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.337873 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.349672 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.349859 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.369693 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cc28g"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374145 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-scripts\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374184 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/851d72f2-c284-488f-a1f1-01a1728d5a18-horizon-secret-key\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374219 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851d72f2-c284-488f-a1f1-01a1728d5a18-logs\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374273 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-db-sync-config-data\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374302 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-combined-ca-bundle\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374329 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-config-data\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374420 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjgt5\" 
(UniqueName: \"kubernetes.io/projected/07c85245-6bcd-4580-a85f-51fa41122292-kube-api-access-zjgt5\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374449 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7zq9\" (UniqueName: \"kubernetes.io/projected/851d72f2-c284-488f-a1f1-01a1728d5a18-kube-api-access-r7zq9\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374471 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-config\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374488 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzvqm\" (UniqueName: \"kubernetes.io/projected/322aceb8-cfb2-478e-a586-68c3f43b3977-kube-api-access-mzvqm\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.374545 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-combined-ca-bundle\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.375789 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-scripts\") 
pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.376976 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851d72f2-c284-488f-a1f1-01a1728d5a18-logs\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.381873 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-combined-ca-bundle\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.382879 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/851d72f2-c284-488f-a1f1-01a1728d5a18-horizon-secret-key\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.386741 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-db-sync-config-data\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.394254 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-config-data\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc 
kubenswrapper[4695]: I1126 13:44:32.396642 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7zq9\" (UniqueName: \"kubernetes.io/projected/851d72f2-c284-488f-a1f1-01a1728d5a18-kube-api-access-r7zq9\") pod \"horizon-86bf56c6d9-npgcp\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.397400 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zzx52" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.399965 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86bf56c6d9-npgcp"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.404339 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzvqm\" (UniqueName: \"kubernetes.io/projected/322aceb8-cfb2-478e-a586-68c3f43b3977-kube-api-access-mzvqm\") pod \"barbican-db-sync-s5q69\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.422458 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.442567 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-gs5xh"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.444013 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.449124 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r6llz" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.449309 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.449442 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.451475 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-jjcrb"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.453018 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.475969 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjgt5\" (UniqueName: \"kubernetes.io/projected/07c85245-6bcd-4580-a85f-51fa41122292-kube-api-access-zjgt5\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476025 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-run-httpd\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476067 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-log-httpd\") pod \"ceilometer-0\" (UID: 
\"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476093 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-config\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476149 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-scripts\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476168 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-config-data\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476192 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476268 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grlfh\" (UniqueName: \"kubernetes.io/projected/a9c800b1-62f2-42d6-a64c-95a673861ebb-kube-api-access-grlfh\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476293 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.476313 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-combined-ca-bundle\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.478435 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gs5xh"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.481961 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-config\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.482801 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-combined-ca-bundle\") pod \"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.492429 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-jjcrb"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.504819 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjgt5\" (UniqueName: \"kubernetes.io/projected/07c85245-6bcd-4580-a85f-51fa41122292-kube-api-access-zjgt5\") pod 
\"neutron-db-sync-cc28g\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.578994 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-combined-ca-bundle\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579039 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579064 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr5z6\" (UniqueName: \"kubernetes.io/projected/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-kube-api-access-jr5z6\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579085 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579144 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grlfh\" (UniqueName: 
\"kubernetes.io/projected/a9c800b1-62f2-42d6-a64c-95a673861ebb-kube-api-access-grlfh\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579475 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-config\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579566 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579592 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579644 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74573dd4-c899-4229-b940-e2f82063aa84-logs\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579676 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-run-httpd\") pod \"ceilometer-0\" (UID: 
\"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579724 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-log-httpd\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579744 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-scripts\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579802 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-257z7\" (UniqueName: \"kubernetes.io/projected/74573dd4-c899-4229-b940-e2f82063aa84-kube-api-access-257z7\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579851 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.579999 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-config-data\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " 
pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.580104 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-scripts\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.580142 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-config-data\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.580193 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.580308 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-log-httpd\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.580409 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-run-httpd\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.584476 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-scripts\") 
pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.588831 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.593987 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-config-data\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.598684 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.599813 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grlfh\" (UniqueName: \"kubernetes.io/projected/a9c800b1-62f2-42d6-a64c-95a673861ebb-kube-api-access-grlfh\") pod \"ceilometer-0\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.608698 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-s5q69" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.623437 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.659053 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cc28g" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.670181 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682499 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-combined-ca-bundle\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682565 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682611 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jr5z6\" (UniqueName: \"kubernetes.io/projected/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-kube-api-access-jr5z6\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682639 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682682 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-config\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682713 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682748 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74573dd4-c899-4229-b940-e2f82063aa84-logs\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682784 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-scripts\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682831 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-257z7\" (UniqueName: \"kubernetes.io/projected/74573dd4-c899-4229-b940-e2f82063aa84-kube-api-access-257z7\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682862 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: 
\"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.682900 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-config-data\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.685124 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.685124 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.685741 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.686177 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-combined-ca-bundle\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc 
kubenswrapper[4695]: I1126 13:44:32.686527 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74573dd4-c899-4229-b940-e2f82063aa84-logs\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.687078 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-config\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.687790 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.691369 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-scripts\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.699395 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-config-data\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.702978 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jr5z6\" (UniqueName: 
\"kubernetes.io/projected/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-kube-api-access-jr5z6\") pod \"dnsmasq-dns-fcfdd6f9f-jjcrb\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.711075 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-257z7\" (UniqueName: \"kubernetes.io/projected/74573dd4-c899-4229-b940-e2f82063aa84-kube-api-access-257z7\") pod \"placement-db-sync-gs5xh\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.713012 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8ppvg"] Nov 26 13:44:32 crc kubenswrapper[4695]: W1126 13:44:32.749027 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod839691bb_afa9_4142_ae17_518432ca3059.slice/crio-4a7f93a7385466a8607e854b9a7fbed3ca41773e4a226c2c983a7ea0903dae4c WatchSource:0}: Error finding container 4a7f93a7385466a8607e854b9a7fbed3ca41773e4a226c2c983a7ea0903dae4c: Status 404 returned error can't find the container with id 4a7f93a7385466a8607e854b9a7fbed3ca41773e4a226c2c983a7ea0903dae4c Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.781560 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gs5xh" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.791688 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.828244 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fb6l4"] Nov 26 13:44:32 crc kubenswrapper[4695]: I1126 13:44:32.839169 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-568d6b79b5-59hdk"] Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.294331 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-zzx52"] Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.294660 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cc28g"] Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.452592 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-568d6b79b5-59hdk" event={"ID":"905c51dc-7482-4b8c-acb2-49f72df79646","Type":"ContainerStarted","Data":"82887653b6fbf481cb33159d186cdd20326ae5f931008c202792f97ec4c19618"} Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.463337 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zzx52" event={"ID":"b6597360-8ab5-4bba-9137-fb4f57019c78","Type":"ContainerStarted","Data":"152033dfc0fd407e7789e447ee263dbbacf760515bd6620df59d63078eeafdc7"} Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.478491 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fb6l4" event={"ID":"1f95654b-fe99-4a25-a066-97e65d8566c8","Type":"ContainerStarted","Data":"2f43114ff0265cf53d49485559d0222504776bc249c10c9571ef9e3535b47ae7"} Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.480678 4695 generic.go:334] "Generic (PLEG): container finished" podID="839691bb-afa9-4142-ae17-518432ca3059" containerID="e326b35b859a4d3890faebeedeb8268f3701e50a2edc200d504a8a9978c5dc67" exitCode=0 Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.480780 4695 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" event={"ID":"839691bb-afa9-4142-ae17-518432ca3059","Type":"ContainerDied","Data":"e326b35b859a4d3890faebeedeb8268f3701e50a2edc200d504a8a9978c5dc67"} Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.480810 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" event={"ID":"839691bb-afa9-4142-ae17-518432ca3059","Type":"ContainerStarted","Data":"4a7f93a7385466a8607e854b9a7fbed3ca41773e4a226c2c983a7ea0903dae4c"} Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.488978 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cc28g" event={"ID":"07c85245-6bcd-4580-a85f-51fa41122292","Type":"ContainerStarted","Data":"c98eaf489904687616a47d709bbd295bacb9833259e293fa217640fecfeef6cd"} Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.508607 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fb6l4" podStartSLOduration=2.508586012 podStartE2EDuration="2.508586012s" podCreationTimestamp="2025-11-26 13:44:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:44:33.508153189 +0000 UTC m=+1257.143978271" watchObservedRunningTime="2025-11-26 13:44:33.508586012 +0000 UTC m=+1257.144411094" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.806700 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.876278 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-s5q69"] Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.887660 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-nb\") pod \"839691bb-afa9-4142-ae17-518432ca3059\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.887721 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7t4w\" (UniqueName: \"kubernetes.io/projected/839691bb-afa9-4142-ae17-518432ca3059-kube-api-access-h7t4w\") pod \"839691bb-afa9-4142-ae17-518432ca3059\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.887783 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-swift-storage-0\") pod \"839691bb-afa9-4142-ae17-518432ca3059\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.887805 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-svc\") pod \"839691bb-afa9-4142-ae17-518432ca3059\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.887830 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-config\") pod \"839691bb-afa9-4142-ae17-518432ca3059\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " Nov 26 
13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.887871 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-sb\") pod \"839691bb-afa9-4142-ae17-518432ca3059\" (UID: \"839691bb-afa9-4142-ae17-518432ca3059\") " Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.898200 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/839691bb-afa9-4142-ae17-518432ca3059-kube-api-access-h7t4w" (OuterVolumeSpecName: "kube-api-access-h7t4w") pod "839691bb-afa9-4142-ae17-518432ca3059" (UID: "839691bb-afa9-4142-ae17-518432ca3059"). InnerVolumeSpecName "kube-api-access-h7t4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.909207 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-gs5xh"] Nov 26 13:44:33 crc kubenswrapper[4695]: W1126 13:44:33.920084 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74573dd4_c899_4229_b940_e2f82063aa84.slice/crio-230c1528b71f7d2bd0e16ba2bc0802afb040bf851a63466b97bc8726c749f966 WatchSource:0}: Error finding container 230c1528b71f7d2bd0e16ba2bc0802afb040bf851a63466b97bc8726c749f966: Status 404 returned error can't find the container with id 230c1528b71f7d2bd0e16ba2bc0802afb040bf851a63466b97bc8726c749f966 Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.925214 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.933692 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-86bf56c6d9-npgcp"] Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.938418 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "839691bb-afa9-4142-ae17-518432ca3059" (UID: "839691bb-afa9-4142-ae17-518432ca3059"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.940899 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-config" (OuterVolumeSpecName: "config") pod "839691bb-afa9-4142-ae17-518432ca3059" (UID: "839691bb-afa9-4142-ae17-518432ca3059"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.944267 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "839691bb-afa9-4142-ae17-518432ca3059" (UID: "839691bb-afa9-4142-ae17-518432ca3059"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.965167 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "839691bb-afa9-4142-ae17-518432ca3059" (UID: "839691bb-afa9-4142-ae17-518432ca3059"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.965179 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "839691bb-afa9-4142-ae17-518432ca3059" (UID: "839691bb-afa9-4142-ae17-518432ca3059"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.990191 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.990225 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7t4w\" (UniqueName: \"kubernetes.io/projected/839691bb-afa9-4142-ae17-518432ca3059-kube-api-access-h7t4w\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.990238 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.990248 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.990257 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.990267 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/839691bb-afa9-4142-ae17-518432ca3059-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:33 crc kubenswrapper[4695]: I1126 13:44:33.993573 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-jjcrb"] Nov 26 13:44:33 crc kubenswrapper[4695]: W1126 13:44:33.998146 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb414e1bf_87e4_4dbb_97a6_79a30fea1cbd.slice/crio-64c0c92862d34c9f663c2bdc22ccc93013fc85bb2585987ed416e96858a9aba4 WatchSource:0}: Error finding container 64c0c92862d34c9f663c2bdc22ccc93013fc85bb2585987ed416e96858a9aba4: Status 404 returned error can't find the container with id 64c0c92862d34c9f663c2bdc22ccc93013fc85bb2585987ed416e96858a9aba4 Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.490298 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-568d6b79b5-59hdk"] Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.508832 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" event={"ID":"839691bb-afa9-4142-ae17-518432ca3059","Type":"ContainerDied","Data":"4a7f93a7385466a8607e854b9a7fbed3ca41773e4a226c2c983a7ea0903dae4c"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.508889 4695 scope.go:117] "RemoveContainer" containerID="e326b35b859a4d3890faebeedeb8268f3701e50a2edc200d504a8a9978c5dc67" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.509032 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-8ppvg" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.522431 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cc28g" event={"ID":"07c85245-6bcd-4580-a85f-51fa41122292","Type":"ContainerStarted","Data":"66ac5dc54a713aeecd987298b3a2dd52ca1778c57dc5ab0e6bb6a64c8ad8e22e"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.541994 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.557074 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-75748864d5-5qdgb"] Nov 26 13:44:34 crc kubenswrapper[4695]: E1126 13:44:34.557448 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="839691bb-afa9-4142-ae17-518432ca3059" containerName="init" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.557465 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="839691bb-afa9-4142-ae17-518432ca3059" containerName="init" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.557638 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="839691bb-afa9-4142-ae17-518432ca3059" containerName="init" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.558517 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.585730 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerStarted","Data":"7f0a155d6e85e737fbf2ea58705d50b91ff69d8dde56dd2018afb77868426445"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.593497 4695 generic.go:334] "Generic (PLEG): container finished" podID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerID="9578544579d57605c4ccc3e02680e31d01338f9c6d05fcb0ab002a90b7ea2b8b" exitCode=0 Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.593754 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" event={"ID":"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd","Type":"ContainerDied","Data":"9578544579d57605c4ccc3e02680e31d01338f9c6d05fcb0ab002a90b7ea2b8b"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.593843 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" event={"ID":"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd","Type":"ContainerStarted","Data":"64c0c92862d34c9f663c2bdc22ccc93013fc85bb2585987ed416e96858a9aba4"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.604626 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-scripts\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.604687 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-horizon-secret-key\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " 
pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.604725 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-config-data\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.604766 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65cdq\" (UniqueName: \"kubernetes.io/projected/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-kube-api-access-65cdq\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.604828 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-logs\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.613496 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75748864d5-5qdgb"] Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.619044 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gs5xh" event={"ID":"74573dd4-c899-4229-b940-e2f82063aa84","Type":"ContainerStarted","Data":"230c1528b71f7d2bd0e16ba2bc0802afb040bf851a63466b97bc8726c749f966"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.649565 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86bf56c6d9-npgcp" 
event={"ID":"851d72f2-c284-488f-a1f1-01a1728d5a18","Type":"ContainerStarted","Data":"c3b2fe0ebaa596fe48f3d08930b12ab27f4fa8e5e4393dc0a9a5f5202283fe25"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.652157 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cc28g" podStartSLOduration=2.6521330770000002 podStartE2EDuration="2.652133077s" podCreationTimestamp="2025-11-26 13:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:44:34.585999387 +0000 UTC m=+1258.221824469" watchObservedRunningTime="2025-11-26 13:44:34.652133077 +0000 UTC m=+1258.287958159" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.672782 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fb6l4" event={"ID":"1f95654b-fe99-4a25-a066-97e65d8566c8","Type":"ContainerStarted","Data":"eda59ffe971d04f35835e6a706624104e1248cb7938d7c9cfd9bf605fbb4cead"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.681536 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s5q69" event={"ID":"322aceb8-cfb2-478e-a586-68c3f43b3977","Type":"ContainerStarted","Data":"7e1e3528076b283371bec4557b74cced485da51bad6d97876e0a902e5b6e4210"} Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.707813 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-logs\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.707946 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-scripts\") pod \"horizon-75748864d5-5qdgb\" (UID: 
\"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.707977 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-horizon-secret-key\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.708052 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-config-data\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.708150 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65cdq\" (UniqueName: \"kubernetes.io/projected/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-kube-api-access-65cdq\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.709283 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-logs\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.724497 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8ppvg"] Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.730798 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-scripts\") pod 
\"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.732548 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-config-data\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.741831 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-horizon-secret-key\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.742047 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65cdq\" (UniqueName: \"kubernetes.io/projected/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-kube-api-access-65cdq\") pod \"horizon-75748864d5-5qdgb\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.745051 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-8ppvg"] Nov 26 13:44:34 crc kubenswrapper[4695]: I1126 13:44:34.903868 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:44:35 crc kubenswrapper[4695]: I1126 13:44:35.182951 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="839691bb-afa9-4142-ae17-518432ca3059" path="/var/lib/kubelet/pods/839691bb-afa9-4142-ae17-518432ca3059/volumes" Nov 26 13:44:35 crc kubenswrapper[4695]: I1126 13:44:35.555432 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-75748864d5-5qdgb"] Nov 26 13:44:35 crc kubenswrapper[4695]: I1126 13:44:35.718507 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75748864d5-5qdgb" event={"ID":"07d2f5ab-b223-43e2-8f00-22eac68d4f5d","Type":"ContainerStarted","Data":"889cba8ce0e16d01eed0a3141c91a500769daa1b69ce9cbbeeb2ce3e6a289619"} Nov 26 13:44:35 crc kubenswrapper[4695]: I1126 13:44:35.773243 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" event={"ID":"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd","Type":"ContainerStarted","Data":"eaabf6c5a6ab5688fde904378a69b5e9c3d49476f21dd3407238972f496a5406"} Nov 26 13:44:35 crc kubenswrapper[4695]: I1126 13:44:35.773750 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:35 crc kubenswrapper[4695]: I1126 13:44:35.821704 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" podStartSLOduration=3.821678648 podStartE2EDuration="3.821678648s" podCreationTimestamp="2025-11-26 13:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:44:35.815613922 +0000 UTC m=+1259.451439004" watchObservedRunningTime="2025-11-26 13:44:35.821678648 +0000 UTC m=+1259.457503730" Nov 26 13:44:36 crc kubenswrapper[4695]: I1126 13:44:36.795072 4695 generic.go:334] "Generic (PLEG): container finished" 
podID="12ef57bb-7a18-4350-a2b8-86efd6babbe0" containerID="c5836f93af7c729db751b5363c438a5cca7867eb9aa3c91fa8795af36f4a29be" exitCode=0 Nov 26 13:44:36 crc kubenswrapper[4695]: I1126 13:44:36.795167 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xktkp" event={"ID":"12ef57bb-7a18-4350-a2b8-86efd6babbe0","Type":"ContainerDied","Data":"c5836f93af7c729db751b5363c438a5cca7867eb9aa3c91fa8795af36f4a29be"} Nov 26 13:44:37 crc kubenswrapper[4695]: I1126 13:44:37.810167 4695 generic.go:334] "Generic (PLEG): container finished" podID="1f95654b-fe99-4a25-a066-97e65d8566c8" containerID="eda59ffe971d04f35835e6a706624104e1248cb7938d7c9cfd9bf605fbb4cead" exitCode=0 Nov 26 13:44:37 crc kubenswrapper[4695]: I1126 13:44:37.810244 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fb6l4" event={"ID":"1f95654b-fe99-4a25-a066-97e65d8566c8","Type":"ContainerDied","Data":"eda59ffe971d04f35835e6a706624104e1248cb7938d7c9cfd9bf605fbb4cead"} Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.176522 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.204977 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xktkp" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.233840 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86bf56c6d9-npgcp"] Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.272726 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-config-data\") pod \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.272801 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-449nm\" (UniqueName: \"kubernetes.io/projected/1f95654b-fe99-4a25-a066-97e65d8566c8-kube-api-access-449nm\") pod \"1f95654b-fe99-4a25-a066-97e65d8566c8\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.272863 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-config-data\") pod \"1f95654b-fe99-4a25-a066-97e65d8566c8\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.272965 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-db-sync-config-data\") pod \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.273027 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn8jj\" (UniqueName: \"kubernetes.io/projected/12ef57bb-7a18-4350-a2b8-86efd6babbe0-kube-api-access-gn8jj\") pod \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\" (UID: 
\"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.273056 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-combined-ca-bundle\") pod \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\" (UID: \"12ef57bb-7a18-4350-a2b8-86efd6babbe0\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.273096 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-scripts\") pod \"1f95654b-fe99-4a25-a066-97e65d8566c8\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.273129 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-combined-ca-bundle\") pod \"1f95654b-fe99-4a25-a066-97e65d8566c8\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.273162 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-fernet-keys\") pod \"1f95654b-fe99-4a25-a066-97e65d8566c8\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.273218 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-credential-keys\") pod \"1f95654b-fe99-4a25-a066-97e65d8566c8\" (UID: \"1f95654b-fe99-4a25-a066-97e65d8566c8\") " Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.280531 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1f95654b-fe99-4a25-a066-97e65d8566c8" (UID: "1f95654b-fe99-4a25-a066-97e65d8566c8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.283385 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-scripts" (OuterVolumeSpecName: "scripts") pod "1f95654b-fe99-4a25-a066-97e65d8566c8" (UID: "1f95654b-fe99-4a25-a066-97e65d8566c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.284333 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12ef57bb-7a18-4350-a2b8-86efd6babbe0-kube-api-access-gn8jj" (OuterVolumeSpecName: "kube-api-access-gn8jj") pod "12ef57bb-7a18-4350-a2b8-86efd6babbe0" (UID: "12ef57bb-7a18-4350-a2b8-86efd6babbe0"). InnerVolumeSpecName "kube-api-access-gn8jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.286768 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b4894b95b-8zpbh"] Nov 26 13:44:41 crc kubenswrapper[4695]: E1126 13:44:41.287135 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f95654b-fe99-4a25-a066-97e65d8566c8" containerName="keystone-bootstrap" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.287147 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f95654b-fe99-4a25-a066-97e65d8566c8" containerName="keystone-bootstrap" Nov 26 13:44:41 crc kubenswrapper[4695]: E1126 13:44:41.287166 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12ef57bb-7a18-4350-a2b8-86efd6babbe0" containerName="glance-db-sync" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.287174 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="12ef57bb-7a18-4350-a2b8-86efd6babbe0" containerName="glance-db-sync" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.287338 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f95654b-fe99-4a25-a066-97e65d8566c8" containerName="keystone-bootstrap" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.287359 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="12ef57bb-7a18-4350-a2b8-86efd6babbe0" containerName="glance-db-sync" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.288248 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.290215 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "12ef57bb-7a18-4350-a2b8-86efd6babbe0" (UID: "12ef57bb-7a18-4350-a2b8-86efd6babbe0"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.302654 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.303074 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1f95654b-fe99-4a25-a066-97e65d8566c8" (UID: "1f95654b-fe99-4a25-a066-97e65d8566c8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.303289 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f95654b-fe99-4a25-a066-97e65d8566c8-kube-api-access-449nm" (OuterVolumeSpecName: "kube-api-access-449nm") pod "1f95654b-fe99-4a25-a066-97e65d8566c8" (UID: "1f95654b-fe99-4a25-a066-97e65d8566c8"). InnerVolumeSpecName "kube-api-access-449nm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.315882 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b4894b95b-8zpbh"] Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.377744 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-secret-key\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.377824 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-tls-certs\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.377863 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-combined-ca-bundle\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.377912 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-scripts\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.377931 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-config-data\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.377970 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-logs\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.377987 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85mhv\" (UniqueName: \"kubernetes.io/projected/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-kube-api-access-85mhv\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.378052 4695 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.378063 4695 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.378074 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-449nm\" (UniqueName: \"kubernetes.io/projected/1f95654b-fe99-4a25-a066-97e65d8566c8-kube-api-access-449nm\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.378082 4695 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.378090 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn8jj\" (UniqueName: \"kubernetes.io/projected/12ef57bb-7a18-4350-a2b8-86efd6babbe0-kube-api-access-gn8jj\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.378098 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.413988 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12ef57bb-7a18-4350-a2b8-86efd6babbe0" (UID: "12ef57bb-7a18-4350-a2b8-86efd6babbe0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.414089 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-config-data" (OuterVolumeSpecName: "config-data") pod "12ef57bb-7a18-4350-a2b8-86efd6babbe0" (UID: "12ef57bb-7a18-4350-a2b8-86efd6babbe0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.418573 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-config-data" (OuterVolumeSpecName: "config-data") pod "1f95654b-fe99-4a25-a066-97e65d8566c8" (UID: "1f95654b-fe99-4a25-a066-97e65d8566c8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.440604 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f95654b-fe99-4a25-a066-97e65d8566c8" (UID: "1f95654b-fe99-4a25-a066-97e65d8566c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.451625 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75748864d5-5qdgb"] Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.478957 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-tls-certs\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479021 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-combined-ca-bundle\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479081 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-scripts\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479107 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-config-data\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479156 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-logs\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479188 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85mhv\" (UniqueName: \"kubernetes.io/projected/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-kube-api-access-85mhv\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479241 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-secret-key\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479296 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479311 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479323 4695 reconciler_common.go:293] "Volume detached for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12ef57bb-7a18-4350-a2b8-86efd6babbe0-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.479335 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f95654b-fe99-4a25-a066-97e65d8566c8-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.480461 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-scripts\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.482647 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-logs\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.483006 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-secret-key\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.483968 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-config-data\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.497136 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-combined-ca-bundle\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.500157 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-tls-certs\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.504058 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7d4c9c9dbd-9bbnw"] Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.511099 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85mhv\" (UniqueName: \"kubernetes.io/projected/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-kube-api-access-85mhv\") pod \"horizon-b4894b95b-8zpbh\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.522807 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.622643 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d4c9c9dbd-9bbnw"] Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.682521 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-horizon-tls-certs\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.682596 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-horizon-secret-key\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.682627 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-combined-ca-bundle\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.682647 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbbfb\" (UniqueName: \"kubernetes.io/projected/3ca1545d-04c5-45f8-8738-f662db77ffba-kube-api-access-mbbfb\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.682664 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ca1545d-04c5-45f8-8738-f662db77ffba-config-data\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.682690 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ca1545d-04c5-45f8-8738-f662db77ffba-logs\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.682736 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ca1545d-04c5-45f8-8738-f662db77ffba-scripts\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.778742 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.784269 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ca1545d-04c5-45f8-8738-f662db77ffba-scripts\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.784332 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-horizon-tls-certs\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.784391 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-horizon-secret-key\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.784422 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-combined-ca-bundle\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.784441 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbbfb\" (UniqueName: \"kubernetes.io/projected/3ca1545d-04c5-45f8-8738-f662db77ffba-kube-api-access-mbbfb\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 
crc kubenswrapper[4695]: I1126 13:44:41.784457 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ca1545d-04c5-45f8-8738-f662db77ffba-config-data\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.784484 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ca1545d-04c5-45f8-8738-f662db77ffba-logs\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.785333 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ca1545d-04c5-45f8-8738-f662db77ffba-logs\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.785504 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ca1545d-04c5-45f8-8738-f662db77ffba-scripts\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.786414 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ca1545d-04c5-45f8-8738-f662db77ffba-config-data\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.788850 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-horizon-secret-key\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.788960 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-horizon-tls-certs\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.790064 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ca1545d-04c5-45f8-8738-f662db77ffba-combined-ca-bundle\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.801133 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbbfb\" (UniqueName: \"kubernetes.io/projected/3ca1545d-04c5-45f8-8738-f662db77ffba-kube-api-access-mbbfb\") pod \"horizon-7d4c9c9dbd-9bbnw\" (UID: \"3ca1545d-04c5-45f8-8738-f662db77ffba\") " pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.846533 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.857753 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fb6l4" event={"ID":"1f95654b-fe99-4a25-a066-97e65d8566c8","Type":"ContainerDied","Data":"2f43114ff0265cf53d49485559d0222504776bc249c10c9571ef9e3535b47ae7"} Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.857793 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f43114ff0265cf53d49485559d0222504776bc249c10c9571ef9e3535b47ae7" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.857807 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fb6l4" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.861808 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-xktkp" event={"ID":"12ef57bb-7a18-4350-a2b8-86efd6babbe0","Type":"ContainerDied","Data":"63a605cec1d87ce2a174da156af8e9128f8af6fa7fc2c4f883a46982b94469dd"} Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.861857 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a605cec1d87ce2a174da156af8e9128f8af6fa7fc2c4f883a46982b94469dd" Nov 26 13:44:41 crc kubenswrapper[4695]: I1126 13:44:41.861935 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-xktkp" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.358691 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fb6l4"] Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.362504 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fb6l4"] Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.460105 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5lf5p"] Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.461205 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.466622 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.466653 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.467327 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.467456 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.467466 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-55hqz" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.507309 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5lf5p"] Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.597974 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-fernet-keys\") pod \"keystone-bootstrap-5lf5p\" (UID: 
\"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.598039 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-credential-keys\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.598085 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2762m\" (UniqueName: \"kubernetes.io/projected/4e6d0467-d196-483d-a0af-c616fcffd987-kube-api-access-2762m\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.598121 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-scripts\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.598152 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-config-data\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.598185 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-combined-ca-bundle\") pod \"keystone-bootstrap-5lf5p\" (UID: 
\"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.637734 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-jjcrb"] Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.638008 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" containerID="cri-o://eaabf6c5a6ab5688fde904378a69b5e9c3d49476f21dd3407238972f496a5406" gracePeriod=10 Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.641641 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.669676 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-2w4b9"] Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.671167 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.699750 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-credential-keys\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.700013 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2762m\" (UniqueName: \"kubernetes.io/projected/4e6d0467-d196-483d-a0af-c616fcffd987-kube-api-access-2762m\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.700110 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-scripts\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.700225 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-config-data\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.700315 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-combined-ca-bundle\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc 
kubenswrapper[4695]: I1126 13:44:42.700444 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-fernet-keys\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.710260 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-config-data\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.720586 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-combined-ca-bundle\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.721493 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-fernet-keys\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.723198 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-scripts\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.723239 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2762m\" (UniqueName: 
\"kubernetes.io/projected/4e6d0467-d196-483d-a0af-c616fcffd987-kube-api-access-2762m\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.725261 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-credential-keys\") pod \"keystone-bootstrap-5lf5p\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.772491 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-2w4b9"] Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.792670 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.804569 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.804675 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-config\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.804711 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l9m7\" (UniqueName: \"kubernetes.io/projected/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-kube-api-access-9l9m7\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.804808 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.804865 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.804935 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.819097 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.906526 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.906825 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.906913 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-config\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.906944 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l9m7\" (UniqueName: \"kubernetes.io/projected/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-kube-api-access-9l9m7\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.907047 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 
26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.907067 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.907281 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.908169 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-config\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.908237 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.909706 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.910731 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.926464 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l9m7\" (UniqueName: \"kubernetes.io/projected/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-kube-api-access-9l9m7\") pod \"dnsmasq-dns-57c957c4ff-2w4b9\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:42 crc kubenswrapper[4695]: I1126 13:44:42.996864 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.174106 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f95654b-fe99-4a25-a066-97e65d8566c8" path="/var/lib/kubelet/pods/1f95654b-fe99-4a25-a066-97e65d8566c8/volumes" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.517862 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.519457 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.522577 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gwvqk" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.522758 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.527003 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.527065 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.723824 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.723893 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.723921 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2255\" (UniqueName: \"kubernetes.io/projected/f76608c1-3f59-41e2-bca1-9f1821b94c8d-kube-api-access-m2255\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc 
kubenswrapper[4695]: I1126 13:44:43.723945 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.723975 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-config-data\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.724018 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-logs\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.724081 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-scripts\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.790934 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.796331 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.804885 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.830424 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.830471 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-config-data\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.830525 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-logs\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.830585 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-scripts\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.830660 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.830699 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.830722 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2255\" (UniqueName: \"kubernetes.io/projected/f76608c1-3f59-41e2-bca1-9f1821b94c8d-kube-api-access-m2255\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.831293 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.832764 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-logs\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.832980 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") device mount 
path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.837844 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-config-data\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.838246 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-scripts\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.850379 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.856924 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2255\" (UniqueName: \"kubernetes.io/projected/f76608c1-3f59-41e2-bca1-9f1821b94c8d-kube-api-access-m2255\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.860232 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.874570 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"glance-default-external-api-0\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") " pod="openstack/glance-default-external-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.885420 4695 generic.go:334] "Generic (PLEG): container finished" podID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerID="eaabf6c5a6ab5688fde904378a69b5e9c3d49476f21dd3407238972f496a5406" exitCode=0 Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.885476 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" event={"ID":"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd","Type":"ContainerDied","Data":"eaabf6c5a6ab5688fde904378a69b5e9c3d49476f21dd3407238972f496a5406"} Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.932702 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.933048 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9rb2\" (UniqueName: \"kubernetes.io/projected/6480f507-2af8-48e4-bf5c-89e2783ec61e-kube-api-access-z9rb2\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.933095 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.933207 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.933261 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.933321 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:43 crc kubenswrapper[4695]: I1126 13:44:43.933387 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.035288 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9rb2\" (UniqueName: \"kubernetes.io/projected/6480f507-2af8-48e4-bf5c-89e2783ec61e-kube-api-access-z9rb2\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.035344 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.035415 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.035454 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.035484 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.035510 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.035601 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.036249 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.039384 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-logs\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.039447 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.040658 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.047017 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.047551 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.060302 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9rb2\" (UniqueName: \"kubernetes.io/projected/6480f507-2af8-48e4-bf5c-89e2783ec61e-kube-api-access-z9rb2\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.075517 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.155713 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:44:44 crc kubenswrapper[4695]: I1126 13:44:44.236797 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:44:45 crc kubenswrapper[4695]: I1126 13:44:45.530899 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:44:45 crc kubenswrapper[4695]: I1126 13:44:45.598346 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:44:47 crc kubenswrapper[4695]: E1126 13:44:47.123859 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 26 13:44:47 crc kubenswrapper[4695]: E1126 13:44:47.124525 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzvqm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],D
rop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-s5q69_openstack(322aceb8-cfb2-478e-a586-68c3f43b3977): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:44:47 crc kubenswrapper[4695]: E1126 13:44:47.126468 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-s5q69" podUID="322aceb8-cfb2-478e-a586-68c3f43b3977" Nov 26 13:44:47 crc kubenswrapper[4695]: I1126 13:44:47.792928 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: connect: connection refused" Nov 26 13:44:47 crc kubenswrapper[4695]: E1126 13:44:47.938998 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-s5q69" podUID="322aceb8-cfb2-478e-a586-68c3f43b3977" Nov 26 13:44:51 crc kubenswrapper[4695]: E1126 13:44:51.737103 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 26 13:44:51 crc 
kubenswrapper[4695]: E1126 13:44:51.737643 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6ch54bh584h655h6h5f6h97h687h5fbh5d5h5d8h5b4h7h659h64bh546h589h5d7h8fh7ch677h66ch67ch59h68h685hdch674h5fdh7dh8bhfdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-65cdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod horizon-75748864d5-5qdgb_openstack(07d2f5ab-b223-43e2-8f00-22eac68d4f5d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:44:51 crc kubenswrapper[4695]: E1126 13:44:51.740650 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-75748864d5-5qdgb" podUID="07d2f5ab-b223-43e2-8f00-22eac68d4f5d" Nov 26 13:44:51 crc kubenswrapper[4695]: E1126 13:44:51.751146 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 26 13:44:51 crc kubenswrapper[4695]: E1126 13:44:51.751319 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n699h57dh64h8dh5dfh55hf9hf6h657h548h59dh648h5fchbfh58ch8ch675h66fh9hc8h689hb4h544h57ch566h7dh8fh6dh5fhd5h675h648q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7zq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-86bf56c6d9-npgcp_openstack(851d72f2-c284-488f-a1f1-01a1728d5a18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:44:51 crc kubenswrapper[4695]: E1126 
13:44:51.753419 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-86bf56c6d9-npgcp" podUID="851d72f2-c284-488f-a1f1-01a1728d5a18" Nov 26 13:44:52 crc kubenswrapper[4695]: E1126 13:44:52.170215 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Nov 26 13:44:52 crc kubenswrapper[4695]: E1126 13:44:52.170397 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5h55h5dbh5ddh7dhd7h6h567h6h6dhc6h8dh5cdh66h65bh8h98h68fh64ch5b4h55bh567h55ch65fh67bhf5h579h586h566h577h557hdbq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-grlfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(a9c800b1-62f2-42d6-a64c-95a673861ebb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:44:52 crc kubenswrapper[4695]: E1126 13:44:52.211288 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Nov 26 13:44:52 crc kubenswrapper[4695]: E1126 13:44:52.211490 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n694h5bch6fh5bdh6dh565hdbh4h654h67chfdhbh85h75h5dch59h658h68dhdh596h84h56chb8h78h5c6hf9h99hcbh54ch696hfdh687q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qn58t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-568d6b79b5-59hdk_openstack(905c51dc-7482-4b8c-acb2-49f72df79646): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:44:52 crc kubenswrapper[4695]: E1126 13:44:52.223825 
4695 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-568d6b79b5-59hdk" podUID="905c51dc-7482-4b8c-acb2-49f72df79646" Nov 26 13:44:55 crc kubenswrapper[4695]: I1126 13:44:55.009511 4695 generic.go:334] "Generic (PLEG): container finished" podID="07c85245-6bcd-4580-a85f-51fa41122292" containerID="66ac5dc54a713aeecd987298b3a2dd52ca1778c57dc5ab0e6bb6a64c8ad8e22e" exitCode=0 Nov 26 13:44:55 crc kubenswrapper[4695]: I1126 13:44:55.010077 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cc28g" event={"ID":"07c85245-6bcd-4580-a85f-51fa41122292","Type":"ContainerDied","Data":"66ac5dc54a713aeecd987298b3a2dd52ca1778c57dc5ab0e6bb6a64c8ad8e22e"} Nov 26 13:44:57 crc kubenswrapper[4695]: I1126 13:44:57.792938 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: i/o timeout" Nov 26 13:44:57 crc kubenswrapper[4695]: I1126 13:44:57.793756 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.143478 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t"] Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.144896 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.148238 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.148312 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.157108 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t"] Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.255913 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-secret-volume\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.256056 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-config-volume\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.256103 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwg99\" (UniqueName: \"kubernetes.io/projected/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-kube-api-access-cwg99\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.358023 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-config-volume\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.358285 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwg99\" (UniqueName: \"kubernetes.io/projected/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-kube-api-access-cwg99\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.358415 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-secret-volume\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.359815 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-config-volume\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.364189 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-secret-volume\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.381901 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwg99\" (UniqueName: \"kubernetes.io/projected/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-kube-api-access-cwg99\") pod \"collect-profiles-29402745-8ms7t\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.466642 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.471012 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.475283 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.481836 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.503081 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cc28g" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.523803 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571734 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-config\") pod \"07c85245-6bcd-4580-a85f-51fa41122292\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571769 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-combined-ca-bundle\") pod \"07c85245-6bcd-4580-a85f-51fa41122292\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571787 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr5z6\" (UniqueName: \"kubernetes.io/projected/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-kube-api-access-jr5z6\") pod \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571804 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-nb\") pod \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571838 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7zq9\" (UniqueName: \"kubernetes.io/projected/851d72f2-c284-488f-a1f1-01a1728d5a18-kube-api-access-r7zq9\") pod \"851d72f2-c284-488f-a1f1-01a1728d5a18\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571872 4695 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-sb\") pod \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571887 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjgt5\" (UniqueName: \"kubernetes.io/projected/07c85245-6bcd-4580-a85f-51fa41122292-kube-api-access-zjgt5\") pod \"07c85245-6bcd-4580-a85f-51fa41122292\" (UID: \"07c85245-6bcd-4580-a85f-51fa41122292\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571913 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-scripts\") pod \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571936 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-scripts\") pod \"851d72f2-c284-488f-a1f1-01a1728d5a18\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571954 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-horizon-secret-key\") pod \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.571985 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-config-data\") pod \"905c51dc-7482-4b8c-acb2-49f72df79646\" (UID: 
\"905c51dc-7482-4b8c-acb2-49f72df79646\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572004 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-svc\") pod \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572042 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/851d72f2-c284-488f-a1f1-01a1728d5a18-horizon-secret-key\") pod \"851d72f2-c284-488f-a1f1-01a1728d5a18\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572059 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-config\") pod \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572073 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-scripts\") pod \"905c51dc-7482-4b8c-acb2-49f72df79646\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572113 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-logs\") pod \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572129 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qn58t\" (UniqueName: 
\"kubernetes.io/projected/905c51dc-7482-4b8c-acb2-49f72df79646-kube-api-access-qn58t\") pod \"905c51dc-7482-4b8c-acb2-49f72df79646\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572148 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-config-data\") pod \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572167 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-config-data\") pod \"851d72f2-c284-488f-a1f1-01a1728d5a18\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572215 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-swift-storage-0\") pod \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\" (UID: \"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572252 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905c51dc-7482-4b8c-acb2-49f72df79646-logs\") pod \"905c51dc-7482-4b8c-acb2-49f72df79646\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572277 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/905c51dc-7482-4b8c-acb2-49f72df79646-horizon-secret-key\") pod \"905c51dc-7482-4b8c-acb2-49f72df79646\" (UID: \"905c51dc-7482-4b8c-acb2-49f72df79646\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572325 
4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65cdq\" (UniqueName: \"kubernetes.io/projected/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-kube-api-access-65cdq\") pod \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\" (UID: \"07d2f5ab-b223-43e2-8f00-22eac68d4f5d\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572365 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851d72f2-c284-488f-a1f1-01a1728d5a18-logs\") pod \"851d72f2-c284-488f-a1f1-01a1728d5a18\" (UID: \"851d72f2-c284-488f-a1f1-01a1728d5a18\") " Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.572975 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851d72f2-c284-488f-a1f1-01a1728d5a18-logs" (OuterVolumeSpecName: "logs") pod "851d72f2-c284-488f-a1f1-01a1728d5a18" (UID: "851d72f2-c284-488f-a1f1-01a1728d5a18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.573651 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-config-data" (OuterVolumeSpecName: "config-data") pod "851d72f2-c284-488f-a1f1-01a1728d5a18" (UID: "851d72f2-c284-488f-a1f1-01a1728d5a18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.573898 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-config-data" (OuterVolumeSpecName: "config-data") pod "07d2f5ab-b223-43e2-8f00-22eac68d4f5d" (UID: "07d2f5ab-b223-43e2-8f00-22eac68d4f5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.574375 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-scripts" (OuterVolumeSpecName: "scripts") pod "905c51dc-7482-4b8c-acb2-49f72df79646" (UID: "905c51dc-7482-4b8c-acb2-49f72df79646"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.574532 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-logs" (OuterVolumeSpecName: "logs") pod "07d2f5ab-b223-43e2-8f00-22eac68d4f5d" (UID: "07d2f5ab-b223-43e2-8f00-22eac68d4f5d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.576175 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-config-data" (OuterVolumeSpecName: "config-data") pod "905c51dc-7482-4b8c-acb2-49f72df79646" (UID: "905c51dc-7482-4b8c-acb2-49f72df79646"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.576533 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-scripts" (OuterVolumeSpecName: "scripts") pod "07d2f5ab-b223-43e2-8f00-22eac68d4f5d" (UID: "07d2f5ab-b223-43e2-8f00-22eac68d4f5d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.577195 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-scripts" (OuterVolumeSpecName: "scripts") pod "851d72f2-c284-488f-a1f1-01a1728d5a18" (UID: "851d72f2-c284-488f-a1f1-01a1728d5a18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.591620 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c85245-6bcd-4580-a85f-51fa41122292-kube-api-access-zjgt5" (OuterVolumeSpecName: "kube-api-access-zjgt5") pod "07c85245-6bcd-4580-a85f-51fa41122292" (UID: "07c85245-6bcd-4580-a85f-51fa41122292"). InnerVolumeSpecName "kube-api-access-zjgt5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.593162 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/905c51dc-7482-4b8c-acb2-49f72df79646-logs" (OuterVolumeSpecName: "logs") pod "905c51dc-7482-4b8c-acb2-49f72df79646" (UID: "905c51dc-7482-4b8c-acb2-49f72df79646"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.598523 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851d72f2-c284-488f-a1f1-01a1728d5a18-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "851d72f2-c284-488f-a1f1-01a1728d5a18" (UID: "851d72f2-c284-488f-a1f1-01a1728d5a18"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.600881 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-kube-api-access-65cdq" (OuterVolumeSpecName: "kube-api-access-65cdq") pod "07d2f5ab-b223-43e2-8f00-22eac68d4f5d" (UID: "07d2f5ab-b223-43e2-8f00-22eac68d4f5d"). InnerVolumeSpecName "kube-api-access-65cdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.601160 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-kube-api-access-jr5z6" (OuterVolumeSpecName: "kube-api-access-jr5z6") pod "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" (UID: "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd"). InnerVolumeSpecName "kube-api-access-jr5z6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.618948 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905c51dc-7482-4b8c-acb2-49f72df79646-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "905c51dc-7482-4b8c-acb2-49f72df79646" (UID: "905c51dc-7482-4b8c-acb2-49f72df79646"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.619777 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905c51dc-7482-4b8c-acb2-49f72df79646-kube-api-access-qn58t" (OuterVolumeSpecName: "kube-api-access-qn58t") pod "905c51dc-7482-4b8c-acb2-49f72df79646" (UID: "905c51dc-7482-4b8c-acb2-49f72df79646"). InnerVolumeSpecName "kube-api-access-qn58t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.620718 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "07d2f5ab-b223-43e2-8f00-22eac68d4f5d" (UID: "07d2f5ab-b223-43e2-8f00-22eac68d4f5d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.621403 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851d72f2-c284-488f-a1f1-01a1728d5a18-kube-api-access-r7zq9" (OuterVolumeSpecName: "kube-api-access-r7zq9") pod "851d72f2-c284-488f-a1f1-01a1728d5a18" (UID: "851d72f2-c284-488f-a1f1-01a1728d5a18"). InnerVolumeSpecName "kube-api-access-r7zq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.630473 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07c85245-6bcd-4580-a85f-51fa41122292" (UID: "07c85245-6bcd-4580-a85f-51fa41122292"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.644479 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-config" (OuterVolumeSpecName: "config") pod "07c85245-6bcd-4580-a85f-51fa41122292" (UID: "07c85245-6bcd-4580-a85f-51fa41122292"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.650041 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-config" (OuterVolumeSpecName: "config") pod "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" (UID: "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.661504 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" (UID: "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.669933 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" (UID: "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674395 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/905c51dc-7482-4b8c-acb2-49f72df79646-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674436 4695 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/905c51dc-7482-4b8c-acb2-49f72df79646-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674475 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65cdq\" (UniqueName: \"kubernetes.io/projected/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-kube-api-access-65cdq\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674491 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/851d72f2-c284-488f-a1f1-01a1728d5a18-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674502 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674513 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07c85245-6bcd-4580-a85f-51fa41122292-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674526 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jr5z6\" (UniqueName: \"kubernetes.io/projected/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-kube-api-access-jr5z6\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674537 4695 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7zq9\" (UniqueName: \"kubernetes.io/projected/851d72f2-c284-488f-a1f1-01a1728d5a18-kube-api-access-r7zq9\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674550 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjgt5\" (UniqueName: \"kubernetes.io/projected/07c85245-6bcd-4580-a85f-51fa41122292-kube-api-access-zjgt5\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674561 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674571 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674581 4695 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674592 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674606 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674619 4695 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/851d72f2-c284-488f-a1f1-01a1728d5a18-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674631 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674641 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/905c51dc-7482-4b8c-acb2-49f72df79646-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674651 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674661 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qn58t\" (UniqueName: \"kubernetes.io/projected/905c51dc-7482-4b8c-acb2-49f72df79646-kube-api-access-qn58t\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674672 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07d2f5ab-b223-43e2-8f00-22eac68d4f5d-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674682 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/851d72f2-c284-488f-a1f1-01a1728d5a18-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.674692 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 
13:45:00.678240 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" (UID: "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.683524 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" (UID: "b414e1bf-87e4-4dbb-97a6-79a30fea1cbd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.776075 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:00 crc kubenswrapper[4695]: I1126 13:45:00.776110 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.067214 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-75748864d5-5qdgb" event={"ID":"07d2f5ab-b223-43e2-8f00-22eac68d4f5d","Type":"ContainerDied","Data":"889cba8ce0e16d01eed0a3141c91a500769daa1b69ce9cbbeeb2ce3e6a289619"} Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.067254 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-75748864d5-5qdgb" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.068836 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-568d6b79b5-59hdk" event={"ID":"905c51dc-7482-4b8c-acb2-49f72df79646","Type":"ContainerDied","Data":"82887653b6fbf481cb33159d186cdd20326ae5f931008c202792f97ec4c19618"} Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.068950 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-568d6b79b5-59hdk" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.083533 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-86bf56c6d9-npgcp" event={"ID":"851d72f2-c284-488f-a1f1-01a1728d5a18","Type":"ContainerDied","Data":"c3b2fe0ebaa596fe48f3d08930b12ab27f4fa8e5e4393dc0a9a5f5202283fe25"} Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.083728 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-86bf56c6d9-npgcp" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.088138 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cc28g" event={"ID":"07c85245-6bcd-4580-a85f-51fa41122292","Type":"ContainerDied","Data":"c98eaf489904687616a47d709bbd295bacb9833259e293fa217640fecfeef6cd"} Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.088198 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c98eaf489904687616a47d709bbd295bacb9833259e293fa217640fecfeef6cd" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.088283 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-cc28g" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.108824 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" event={"ID":"b414e1bf-87e4-4dbb-97a6-79a30fea1cbd","Type":"ContainerDied","Data":"64c0c92862d34c9f663c2bdc22ccc93013fc85bb2585987ed416e96858a9aba4"} Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.108868 4695 scope.go:117] "RemoveContainer" containerID="eaabf6c5a6ab5688fde904378a69b5e9c3d49476f21dd3407238972f496a5406" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.109014 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.143864 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-75748864d5-5qdgb"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.152205 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-75748864d5-5qdgb"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.185392 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07d2f5ab-b223-43e2-8f00-22eac68d4f5d" path="/var/lib/kubelet/pods/07d2f5ab-b223-43e2-8f00-22eac68d4f5d/volumes" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.201049 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-568d6b79b5-59hdk"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.230241 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-568d6b79b5-59hdk"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.253992 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-86bf56c6d9-npgcp"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.263000 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-86bf56c6d9-npgcp"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 
13:45:01.270002 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-jjcrb"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.276419 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-jjcrb"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.703153 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-2w4b9"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.748643 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9k8dw"] Nov 26 13:45:01 crc kubenswrapper[4695]: E1126 13:45:01.749002 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.749014 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" Nov 26 13:45:01 crc kubenswrapper[4695]: E1126 13:45:01.749032 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c85245-6bcd-4580-a85f-51fa41122292" containerName="neutron-db-sync" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.749038 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c85245-6bcd-4580-a85f-51fa41122292" containerName="neutron-db-sync" Nov 26 13:45:01 crc kubenswrapper[4695]: E1126 13:45:01.749049 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="init" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.749055 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="init" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.749213 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c85245-6bcd-4580-a85f-51fa41122292" containerName="neutron-db-sync" Nov 26 13:45:01 crc 
kubenswrapper[4695]: I1126 13:45:01.749230 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.751867 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.771478 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9k8dw"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.896296 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.896659 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.896852 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.896994 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8k8l\" (UniqueName: 
\"kubernetes.io/projected/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-kube-api-access-w8k8l\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.897157 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-config\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.897281 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.946972 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-95668c77b-9hn77"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.949522 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.951323 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pw8jw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.951858 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.953765 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.958772 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.962395 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95668c77b-9hn77"] Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999002 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-combined-ca-bundle\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999077 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-config\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999118 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rl9c\" (UniqueName: \"kubernetes.io/projected/90feb883-0869-4d2d-bce9-1678291bf72f-kube-api-access-9rl9c\") pod \"neutron-95668c77b-9hn77\" (UID: 
\"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999163 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999223 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999254 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-ovndb-tls-certs\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999279 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-httpd-config\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999310 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: 
\"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999429 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999471 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-config\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:01 crc kubenswrapper[4695]: I1126 13:45:01.999515 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8k8l\" (UniqueName: \"kubernetes.io/projected/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-kube-api-access-w8k8l\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.000158 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-config\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.000247 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc 
kubenswrapper[4695]: I1126 13:45:02.000477 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.000606 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.000826 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.033937 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8k8l\" (UniqueName: \"kubernetes.io/projected/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-kube-api-access-w8k8l\") pod \"dnsmasq-dns-5ccc5c4795-9k8dw\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.074431 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.101569 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-combined-ca-bundle\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.101649 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rl9c\" (UniqueName: \"kubernetes.io/projected/90feb883-0869-4d2d-bce9-1678291bf72f-kube-api-access-9rl9c\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.101720 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-ovndb-tls-certs\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.101744 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-httpd-config\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.101847 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-config\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: 
I1126 13:45:02.109966 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-httpd-config\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.112035 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-combined-ca-bundle\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.119442 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-config\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.120312 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-ovndb-tls-certs\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.132194 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rl9c\" (UniqueName: \"kubernetes.io/projected/90feb883-0869-4d2d-bce9-1678291bf72f-kube-api-access-9rl9c\") pod \"neutron-95668c77b-9hn77\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.267239 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:02 crc kubenswrapper[4695]: E1126 13:45:02.517186 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 26 13:45:02 crc kubenswrapper[4695]: E1126 13:45:02.517336 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls
-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2hcjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-zzx52_openstack(b6597360-8ab5-4bba-9137-fb4f57019c78): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 13:45:02 crc kubenswrapper[4695]: E1126 13:45:02.520576 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-zzx52" podUID="b6597360-8ab5-4bba-9137-fb4f57019c78" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.794442 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-fcfdd6f9f-jjcrb" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.141:5353: i/o timeout" Nov 26 13:45:02 crc kubenswrapper[4695]: I1126 13:45:02.996656 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-2w4b9"] Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.079729 4695 scope.go:117] "RemoveContainer" 
containerID="9578544579d57605c4ccc3e02680e31d01338f9c6d05fcb0ab002a90b7ea2b8b" Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.144820 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" event={"ID":"ae628a6f-5a57-4bef-83d3-c551bb4edfe1","Type":"ContainerStarted","Data":"1a94a7e13ce3dc17b9995d461a45a4f70b5f77b77b7784b57c12ee891ab78e6e"} Nov 26 13:45:03 crc kubenswrapper[4695]: E1126 13:45:03.158842 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-zzx52" podUID="b6597360-8ab5-4bba-9137-fb4f57019c78" Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.199495 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851d72f2-c284-488f-a1f1-01a1728d5a18" path="/var/lib/kubelet/pods/851d72f2-c284-488f-a1f1-01a1728d5a18/volumes" Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.199985 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905c51dc-7482-4b8c-acb2-49f72df79646" path="/var/lib/kubelet/pods/905c51dc-7482-4b8c-acb2-49f72df79646/volumes" Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.200358 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b414e1bf-87e4-4dbb-97a6-79a30fea1cbd" path="/var/lib/kubelet/pods/b414e1bf-87e4-4dbb-97a6-79a30fea1cbd/volumes" Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.442605 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7d4c9c9dbd-9bbnw"] Nov 26 13:45:03 crc kubenswrapper[4695]: W1126 13:45:03.528466 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ca1545d_04c5_45f8_8738_f662db77ffba.slice/crio-98327958aa1774c8fa10280db44bc460ff991fcb38cef8e3d3e9c4e175e52d94 WatchSource:0}: Error finding container 98327958aa1774c8fa10280db44bc460ff991fcb38cef8e3d3e9c4e175e52d94: Status 404 returned error can't find the container with id 98327958aa1774c8fa10280db44bc460ff991fcb38cef8e3d3e9c4e175e52d94 Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.576393 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5lf5p"] Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.707201 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b4894b95b-8zpbh"] Nov 26 13:45:03 crc kubenswrapper[4695]: W1126 13:45:03.716536 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda8b0d7_85f5_4274_a12e_a09982b9fe3c.slice/crio-100c628a2463b22ba34178fe8be9552e81b2024ed522772158f73e210c39274f WatchSource:0}: Error finding container 100c628a2463b22ba34178fe8be9552e81b2024ed522772158f73e210c39274f: Status 404 returned error can't find the container with id 100c628a2463b22ba34178fe8be9552e81b2024ed522772158f73e210c39274f Nov 26 13:45:03 crc kubenswrapper[4695]: I1126 13:45:03.759016 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:45:03 crc kubenswrapper[4695]: W1126 13:45:03.771604 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6480f507_2af8_48e4_bf5c_89e2783ec61e.slice/crio-6e8a02585c0ec98fbf8f4eb42abf3a560ea2437f3206ca53c9a8495883a43e31 WatchSource:0}: Error finding container 6e8a02585c0ec98fbf8f4eb42abf3a560ea2437f3206ca53c9a8495883a43e31: Status 404 returned error can't find the container with id 6e8a02585c0ec98fbf8f4eb42abf3a560ea2437f3206ca53c9a8495883a43e31 Nov 26 13:45:03 crc 
kubenswrapper[4695]: I1126 13:45:03.915558 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9k8dw"] Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.066082 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-95668c77b-9hn77"] Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.160388 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t"] Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.193570 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s5q69" event={"ID":"322aceb8-cfb2-478e-a586-68c3f43b3977","Type":"ContainerStarted","Data":"73bcfc8a9ba792371d225d90aaab498137bf2fa8860559fdb11e2e86166be2ab"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.201095 4695 generic.go:334] "Generic (PLEG): container finished" podID="ae628a6f-5a57-4bef-83d3-c551bb4edfe1" containerID="9c566245e370e41c120599c944f6d361caaf398e8217cbd0001a68a4d7b0ba2f" exitCode=0 Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.201182 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" event={"ID":"ae628a6f-5a57-4bef-83d3-c551bb4edfe1","Type":"ContainerDied","Data":"9c566245e370e41c120599c944f6d361caaf398e8217cbd0001a68a4d7b0ba2f"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.211020 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerStarted","Data":"d1d5dd367b065a5db2b4a6fd92d8ffbd7ec67e7670b62d10af5efbb6e945d944"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.215974 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-s5q69" podStartSLOduration=2.811205749 podStartE2EDuration="32.215956952s" podCreationTimestamp="2025-11-26 13:44:32 +0000 UTC" 
firstStartedPulling="2025-11-26 13:44:33.892899518 +0000 UTC m=+1257.528724600" lastFinishedPulling="2025-11-26 13:45:03.297650731 +0000 UTC m=+1286.933475803" observedRunningTime="2025-11-26 13:45:04.212059777 +0000 UTC m=+1287.847884859" watchObservedRunningTime="2025-11-26 13:45:04.215956952 +0000 UTC m=+1287.851782034" Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.233179 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95668c77b-9hn77" event={"ID":"90feb883-0869-4d2d-bce9-1678291bf72f","Type":"ContainerStarted","Data":"f8daf9b3168701814e3418386708739810bdc51b3876636ecf4aecca1d62ca39"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.264849 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lf5p" event={"ID":"4e6d0467-d196-483d-a0af-c616fcffd987","Type":"ContainerStarted","Data":"1e5cb882573728f88557512418f95b54b847dccc4aa2f5b2ac0d528ee616bf5c"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.264886 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lf5p" event={"ID":"4e6d0467-d196-483d-a0af-c616fcffd987","Type":"ContainerStarted","Data":"f2e48c08121d0de5226cc3abb106028e00f9f767a400c6c6d7a562e1fb509cce"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.270975 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6480f507-2af8-48e4-bf5c-89e2783ec61e","Type":"ContainerStarted","Data":"6e8a02585c0ec98fbf8f4eb42abf3a560ea2437f3206ca53c9a8495883a43e31"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.273338 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d4c9c9dbd-9bbnw" event={"ID":"3ca1545d-04c5-45f8-8738-f662db77ffba","Type":"ContainerStarted","Data":"98327958aa1774c8fa10280db44bc460ff991fcb38cef8e3d3e9c4e175e52d94"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.275794 4695 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/placement-db-sync-gs5xh" event={"ID":"74573dd4-c899-4229-b940-e2f82063aa84","Type":"ContainerStarted","Data":"d7964da8f13d02f415806519a7fa08d01150a9fc7b8a57d489bc022ef55b8fb6"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.278230 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" event={"ID":"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa","Type":"ContainerStarted","Data":"4e02fe4b3dafbf31e4828b96526099709be611acf4896d80184d6225f976f433"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.311048 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4894b95b-8zpbh" event={"ID":"fda8b0d7-85f5-4274-a12e-a09982b9fe3c","Type":"ContainerStarted","Data":"100c628a2463b22ba34178fe8be9552e81b2024ed522772158f73e210c39274f"} Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.326234 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5lf5p" podStartSLOduration=22.326215102 podStartE2EDuration="22.326215102s" podCreationTimestamp="2025-11-26 13:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:04.286537225 +0000 UTC m=+1287.922362318" watchObservedRunningTime="2025-11-26 13:45:04.326215102 +0000 UTC m=+1287.962040184" Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.329150 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-gs5xh" podStartSLOduration=5.953724813 podStartE2EDuration="32.329140567s" podCreationTimestamp="2025-11-26 13:44:32 +0000 UTC" firstStartedPulling="2025-11-26 13:44:33.925361353 +0000 UTC m=+1257.561186445" lastFinishedPulling="2025-11-26 13:45:00.300777117 +0000 UTC m=+1283.936602199" observedRunningTime="2025-11-26 13:45:04.318877526 +0000 UTC m=+1287.954702608" watchObservedRunningTime="2025-11-26 13:45:04.329140567 +0000 UTC 
m=+1287.964965649" Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.708849 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:45:04 crc kubenswrapper[4695]: I1126 13:45:04.940079 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.037462 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-svc\") pod \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.037516 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-config\") pod \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.037631 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l9m7\" (UniqueName: \"kubernetes.io/projected/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-kube-api-access-9l9m7\") pod \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.037669 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-sb\") pod \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") " Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.037686 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-swift-storage-0\") pod \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") "
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.037778 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-nb\") pod \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\" (UID: \"ae628a6f-5a57-4bef-83d3-c551bb4edfe1\") "
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.110377 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bcbb85b97-9vnqg"]
Nov 26 13:45:05 crc kubenswrapper[4695]: E1126 13:45:05.111132 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae628a6f-5a57-4bef-83d3-c551bb4edfe1" containerName="init"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.111152 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae628a6f-5a57-4bef-83d3-c551bb4edfe1" containerName="init"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.111442 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae628a6f-5a57-4bef-83d3-c551bb4edfe1" containerName="init"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.112584 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.118669 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.118785 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bcbb85b97-9vnqg"]
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.137980 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.204039 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-kube-api-access-9l9m7" (OuterVolumeSpecName: "kube-api-access-9l9m7") pod "ae628a6f-5a57-4bef-83d3-c551bb4edfe1" (UID: "ae628a6f-5a57-4bef-83d3-c551bb4edfe1"). InnerVolumeSpecName "kube-api-access-9l9m7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240495 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-config\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240538 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8kh\" (UniqueName: \"kubernetes.io/projected/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-kube-api-access-4r8kh\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240588 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-httpd-config\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240707 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-ovndb-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240786 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-public-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240811 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-internal-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240833 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-combined-ca-bundle\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.240890 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l9m7\" (UniqueName: \"kubernetes.io/projected/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-kube-api-access-9l9m7\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.341931 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-combined-ca-bundle\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.341981 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-config\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.342001 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r8kh\" (UniqueName: \"kubernetes.io/projected/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-kube-api-access-4r8kh\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.342020 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-httpd-config\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.342106 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-ovndb-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.342164 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-public-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.342184 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-internal-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.349557 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95668c77b-9hn77" event={"ID":"90feb883-0869-4d2d-bce9-1678291bf72f","Type":"ContainerStarted","Data":"41778b070eb0d881071a8787448e9bbbda5f5c824a871136eaa5b506fe14849d"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.350300 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-combined-ca-bundle\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.355194 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d4c9c9dbd-9bbnw" event={"ID":"3ca1545d-04c5-45f8-8738-f662db77ffba","Type":"ContainerStarted","Data":"994d687a792d608c542b05e9e6474c48079eb208daa61f2167a110cb5dfcde94"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.358060 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76608c1-3f59-41e2-bca1-9f1821b94c8d","Type":"ContainerStarted","Data":"9ffebbe0d2bb08bf421948915a7f033249dc1287566be54454b240401eec4adc"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.360462 4695 generic.go:334] "Generic (PLEG): container finished" podID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerID="5d8f328cef2470d66242640f44a0eec0be992c9749cde6cdbff1c4820e45d41e" exitCode=0
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.361954 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" event={"ID":"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa","Type":"ContainerDied","Data":"5d8f328cef2470d66242640f44a0eec0be992c9749cde6cdbff1c4820e45d41e"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.369060 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-ovndb-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.370252 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-config\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.378884 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r8kh\" (UniqueName: \"kubernetes.io/projected/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-kube-api-access-4r8kh\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.379928 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-public-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.380523 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-httpd-config\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.384959 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9c6bc4a2-d4c9-4bdc-a576-03bf4101b606-internal-tls-certs\") pod \"neutron-bcbb85b97-9vnqg\" (UID: \"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606\") " pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.395252 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.395590 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-2w4b9" event={"ID":"ae628a6f-5a57-4bef-83d3-c551bb4edfe1","Type":"ContainerDied","Data":"1a94a7e13ce3dc17b9995d461a45a4f70b5f77b77b7784b57c12ee891ab78e6e"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.395668 4695 scope.go:117] "RemoveContainer" containerID="9c566245e370e41c120599c944f6d361caaf398e8217cbd0001a68a4d7b0ba2f"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.429559 4695 generic.go:334] "Generic (PLEG): container finished" podID="54dd4fa9-5754-4da9-852d-c6b0ccbcc258" containerID="be7b48c508a868d930fa00045d3f0c6b1b1780d35d972d52901120bd5a4982e6" exitCode=0
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.429645 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" event={"ID":"54dd4fa9-5754-4da9-852d-c6b0ccbcc258","Type":"ContainerDied","Data":"be7b48c508a868d930fa00045d3f0c6b1b1780d35d972d52901120bd5a4982e6"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.429670 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" event={"ID":"54dd4fa9-5754-4da9-852d-c6b0ccbcc258","Type":"ContainerStarted","Data":"4b47c28a649de74e8c3a5b74f76751d8d947792cbb1962ce33f74c568d6a72fe"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.435825 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6480f507-2af8-48e4-bf5c-89e2783ec61e","Type":"ContainerStarted","Data":"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99"}
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.485555 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.573891 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae628a6f-5a57-4bef-83d3-c551bb4edfe1" (UID: "ae628a6f-5a57-4bef-83d3-c551bb4edfe1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.593755 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae628a6f-5a57-4bef-83d3-c551bb4edfe1" (UID: "ae628a6f-5a57-4bef-83d3-c551bb4edfe1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.648498 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.648817 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.708366 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-config" (OuterVolumeSpecName: "config") pod "ae628a6f-5a57-4bef-83d3-c551bb4edfe1" (UID: "ae628a6f-5a57-4bef-83d3-c551bb4edfe1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.725474 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae628a6f-5a57-4bef-83d3-c551bb4edfe1" (UID: "ae628a6f-5a57-4bef-83d3-c551bb4edfe1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.754664 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae628a6f-5a57-4bef-83d3-c551bb4edfe1" (UID: "ae628a6f-5a57-4bef-83d3-c551bb4edfe1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.754721 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.754751 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-config\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:05 crc kubenswrapper[4695]: I1126 13:45:05.857725 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae628a6f-5a57-4bef-83d3-c551bb4edfe1-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.101771 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-2w4b9"]
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.110542 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-2w4b9"]
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.364177 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bcbb85b97-9vnqg"]
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.492406 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95668c77b-9hn77" event={"ID":"90feb883-0869-4d2d-bce9-1678291bf72f","Type":"ContainerStarted","Data":"497203c429007bc3bf7d4001e7c388aca6a02f69c926cc99ce75c8552ba164f2"}
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.492657 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-95668c77b-9hn77"
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.497305 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7d4c9c9dbd-9bbnw" event={"ID":"3ca1545d-04c5-45f8-8738-f662db77ffba","Type":"ContainerStarted","Data":"b948d55de9d04890c6f53f664ccb60256b0b74f1eacf876c82cb301a56243685"}
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.503642 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76608c1-3f59-41e2-bca1-9f1821b94c8d","Type":"ContainerStarted","Data":"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26"}
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.516437 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-95668c77b-9hn77" podStartSLOduration=5.516408841 podStartE2EDuration="5.516408841s" podCreationTimestamp="2025-11-26 13:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:06.511476102 +0000 UTC m=+1290.147301184" watchObservedRunningTime="2025-11-26 13:45:06.516408841 +0000 UTC m=+1290.152233923"
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.523936 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" event={"ID":"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa","Type":"ContainerStarted","Data":"8723bdf51d6b30759747757c2b834fe5e23000cf9f7558577cc6198bdd269ec3"}
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.524719 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw"
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.536030 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4894b95b-8zpbh" event={"ID":"fda8b0d7-85f5-4274-a12e-a09982b9fe3c","Type":"ContainerStarted","Data":"7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33"}
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.536072 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4894b95b-8zpbh" event={"ID":"fda8b0d7-85f5-4274-a12e-a09982b9fe3c","Type":"ContainerStarted","Data":"208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46"}
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.537277 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7d4c9c9dbd-9bbnw" podStartSLOduration=24.972679002 podStartE2EDuration="25.537255532s" podCreationTimestamp="2025-11-26 13:44:41 +0000 UTC" firstStartedPulling="2025-11-26 13:45:03.534950013 +0000 UTC m=+1287.170775095" lastFinishedPulling="2025-11-26 13:45:04.099526543 +0000 UTC m=+1287.735351625" observedRunningTime="2025-11-26 13:45:06.537135358 +0000 UTC m=+1290.172960440" watchObservedRunningTime="2025-11-26 13:45:06.537255532 +0000 UTC m=+1290.173080624"
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.543506 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcbb85b97-9vnqg" event={"ID":"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606","Type":"ContainerStarted","Data":"0ed86267ffef584422d424baba2bf85e705b28e55852a47c67d00170800b6d09"}
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.568083 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" podStartSLOduration=5.568055204 podStartE2EDuration="5.568055204s" podCreationTimestamp="2025-11-26 13:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:06.561503773 +0000 UTC m=+1290.197328855" watchObservedRunningTime="2025-11-26 13:45:06.568055204 +0000 UTC m=+1290.203880286"
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.600478 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b4894b95b-8zpbh" podStartSLOduration=25.044828365 podStartE2EDuration="25.600413396s" podCreationTimestamp="2025-11-26 13:44:41 +0000 UTC" firstStartedPulling="2025-11-26 13:45:03.719082102 +0000 UTC m=+1287.354907184" lastFinishedPulling="2025-11-26 13:45:04.274667133 +0000 UTC m=+1287.910492215" observedRunningTime="2025-11-26 13:45:06.588413599 +0000 UTC m=+1290.224238681" watchObservedRunningTime="2025-11-26 13:45:06.600413396 +0000 UTC m=+1290.236238478"
Nov 26 13:45:06 crc kubenswrapper[4695]: I1126 13:45:06.994018 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t"
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.103021 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-config-volume\") pod \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") "
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.103728 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-secret-volume\") pod \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") "
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.103758 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwg99\" (UniqueName: \"kubernetes.io/projected/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-kube-api-access-cwg99\") pod \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\" (UID: \"54dd4fa9-5754-4da9-852d-c6b0ccbcc258\") "
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.105897 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-config-volume" (OuterVolumeSpecName: "config-volume") pod "54dd4fa9-5754-4da9-852d-c6b0ccbcc258" (UID: "54dd4fa9-5754-4da9-852d-c6b0ccbcc258"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.117894 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "54dd4fa9-5754-4da9-852d-c6b0ccbcc258" (UID: "54dd4fa9-5754-4da9-852d-c6b0ccbcc258"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.118308 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-kube-api-access-cwg99" (OuterVolumeSpecName: "kube-api-access-cwg99") pod "54dd4fa9-5754-4da9-852d-c6b0ccbcc258" (UID: "54dd4fa9-5754-4da9-852d-c6b0ccbcc258"). InnerVolumeSpecName "kube-api-access-cwg99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.178682 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae628a6f-5a57-4bef-83d3-c551bb4edfe1" path="/var/lib/kubelet/pods/ae628a6f-5a57-4bef-83d3-c551bb4edfe1/volumes"
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.206853 4695 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.206889 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwg99\" (UniqueName: \"kubernetes.io/projected/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-kube-api-access-cwg99\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.206898 4695 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54dd4fa9-5754-4da9-852d-c6b0ccbcc258-config-volume\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.558872 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcbb85b97-9vnqg" event={"ID":"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606","Type":"ContainerStarted","Data":"877b6d02c10f39131fcf5733f162cfc6e71d2ad126051b49e1b549ae64234a91"}
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.559498 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bcbb85b97-9vnqg" event={"ID":"9c6bc4a2-d4c9-4bdc-a576-03bf4101b606","Type":"ContainerStarted","Data":"17e6bec739008a36bd7a692ad6a8979f70c26820183681a135e7c219b2d6cced"}
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.559869 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-bcbb85b97-9vnqg"
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.568128 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t" event={"ID":"54dd4fa9-5754-4da9-852d-c6b0ccbcc258","Type":"ContainerDied","Data":"4b47c28a649de74e8c3a5b74f76751d8d947792cbb1962ce33f74c568d6a72fe"}
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.568187 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b47c28a649de74e8c3a5b74f76751d8d947792cbb1962ce33f74c568d6a72fe"
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.568275 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t"
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.578544 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-bcbb85b97-9vnqg" podStartSLOduration=2.578528553 podStartE2EDuration="2.578528553s" podCreationTimestamp="2025-11-26 13:45:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:07.576842118 +0000 UTC m=+1291.212667200" watchObservedRunningTime="2025-11-26 13:45:07.578528553 +0000 UTC m=+1291.214353635"
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.580635 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6480f507-2af8-48e4-bf5c-89e2783ec61e","Type":"ContainerStarted","Data":"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530"}
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.580847 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-log" containerID="cri-o://97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99" gracePeriod=30
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.581326 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-httpd" containerID="cri-o://5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530" gracePeriod=30
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.591783 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-log" containerID="cri-o://d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26" gracePeriod=30
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.591946 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-httpd" containerID="cri-o://32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8" gracePeriod=30
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.591975 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76608c1-3f59-41e2-bca1-9f1821b94c8d","Type":"ContainerStarted","Data":"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8"}
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.617643 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=25.617623262 podStartE2EDuration="25.617623262s" podCreationTimestamp="2025-11-26 13:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:07.615551674 +0000 UTC m=+1291.251376756" watchObservedRunningTime="2025-11-26 13:45:07.617623262 +0000 UTC m=+1291.253448344"
Nov 26 13:45:07 crc kubenswrapper[4695]: I1126 13:45:07.641704 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=25.641679116 podStartE2EDuration="25.641679116s" podCreationTimestamp="2025-11-26 13:44:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:07.638829364 +0000 UTC m=+1291.274654446" watchObservedRunningTime="2025-11-26 13:45:07.641679116 +0000 UTC m=+1291.277504198"
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.173091 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.237534 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-config-data\") pod \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") "
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.237618 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-logs\") pod \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") "
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.237637 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-scripts\") pod \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") "
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.237782 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") "
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.237827 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2255\" (UniqueName: \"kubernetes.io/projected/f76608c1-3f59-41e2-bca1-9f1821b94c8d-kube-api-access-m2255\") pod \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") "
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.237870 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-httpd-run\") pod \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") "
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.237962 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-combined-ca-bundle\") pod \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\" (UID: \"f76608c1-3f59-41e2-bca1-9f1821b94c8d\") "
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.238957 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f76608c1-3f59-41e2-bca1-9f1821b94c8d" (UID: "f76608c1-3f59-41e2-bca1-9f1821b94c8d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.239435 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-logs" (OuterVolumeSpecName: "logs") pod "f76608c1-3f59-41e2-bca1-9f1821b94c8d" (UID: "f76608c1-3f59-41e2-bca1-9f1821b94c8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.239901 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-logs\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.239913 4695 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f76608c1-3f59-41e2-bca1-9f1821b94c8d-httpd-run\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.245437 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "f76608c1-3f59-41e2-bca1-9f1821b94c8d" (UID: "f76608c1-3f59-41e2-bca1-9f1821b94c8d"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.245665 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-scripts" (OuterVolumeSpecName: "scripts") pod "f76608c1-3f59-41e2-bca1-9f1821b94c8d" (UID: "f76608c1-3f59-41e2-bca1-9f1821b94c8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.260626 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76608c1-3f59-41e2-bca1-9f1821b94c8d-kube-api-access-m2255" (OuterVolumeSpecName: "kube-api-access-m2255") pod "f76608c1-3f59-41e2-bca1-9f1821b94c8d" (UID: "f76608c1-3f59-41e2-bca1-9f1821b94c8d"). InnerVolumeSpecName "kube-api-access-m2255". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.296091 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-config-data" (OuterVolumeSpecName: "config-data") pod "f76608c1-3f59-41e2-bca1-9f1821b94c8d" (UID: "f76608c1-3f59-41e2-bca1-9f1821b94c8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.328569 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f76608c1-3f59-41e2-bca1-9f1821b94c8d" (UID: "f76608c1-3f59-41e2-bca1-9f1821b94c8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.341630 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2255\" (UniqueName: \"kubernetes.io/projected/f76608c1-3f59-41e2-bca1-9f1821b94c8d-kube-api-access-m2255\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.341674 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.341691 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.341704 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76608c1-3f59-41e2-bca1-9f1821b94c8d-scripts\") on node \"crc\" DevicePath \"\""
Nov
26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.341761 4695 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.345599 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.369598 4695 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.443716 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-scripts\") pod \"6480f507-2af8-48e4-bf5c-89e2783ec61e\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.443810 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"6480f507-2af8-48e4-bf5c-89e2783ec61e\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.443890 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-combined-ca-bundle\") pod \"6480f507-2af8-48e4-bf5c-89e2783ec61e\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.443981 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-logs\") pod \"6480f507-2af8-48e4-bf5c-89e2783ec61e\" (UID: 
\"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.444105 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9rb2\" (UniqueName: \"kubernetes.io/projected/6480f507-2af8-48e4-bf5c-89e2783ec61e-kube-api-access-z9rb2\") pod \"6480f507-2af8-48e4-bf5c-89e2783ec61e\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.444202 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-config-data\") pod \"6480f507-2af8-48e4-bf5c-89e2783ec61e\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.444302 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-httpd-run\") pod \"6480f507-2af8-48e4-bf5c-89e2783ec61e\" (UID: \"6480f507-2af8-48e4-bf5c-89e2783ec61e\") " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.445849 4695 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.446411 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6480f507-2af8-48e4-bf5c-89e2783ec61e" (UID: "6480f507-2af8-48e4-bf5c-89e2783ec61e"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.446653 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-logs" (OuterVolumeSpecName: "logs") pod "6480f507-2af8-48e4-bf5c-89e2783ec61e" (UID: "6480f507-2af8-48e4-bf5c-89e2783ec61e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.450823 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-scripts" (OuterVolumeSpecName: "scripts") pod "6480f507-2af8-48e4-bf5c-89e2783ec61e" (UID: "6480f507-2af8-48e4-bf5c-89e2783ec61e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.458703 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6480f507-2af8-48e4-bf5c-89e2783ec61e-kube-api-access-z9rb2" (OuterVolumeSpecName: "kube-api-access-z9rb2") pod "6480f507-2af8-48e4-bf5c-89e2783ec61e" (UID: "6480f507-2af8-48e4-bf5c-89e2783ec61e"). InnerVolumeSpecName "kube-api-access-z9rb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.458850 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "6480f507-2af8-48e4-bf5c-89e2783ec61e" (UID: "6480f507-2af8-48e4-bf5c-89e2783ec61e"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.480695 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6480f507-2af8-48e4-bf5c-89e2783ec61e" (UID: "6480f507-2af8-48e4-bf5c-89e2783ec61e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.524741 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-config-data" (OuterVolumeSpecName: "config-data") pod "6480f507-2af8-48e4-bf5c-89e2783ec61e" (UID: "6480f507-2af8-48e4-bf5c-89e2783ec61e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.547708 4695 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.548114 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.548148 4695 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.548158 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc 
kubenswrapper[4695]: I1126 13:45:08.548174 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6480f507-2af8-48e4-bf5c-89e2783ec61e-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.548186 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9rb2\" (UniqueName: \"kubernetes.io/projected/6480f507-2af8-48e4-bf5c-89e2783ec61e-kube-api-access-z9rb2\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.548198 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6480f507-2af8-48e4-bf5c-89e2783ec61e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.571421 4695 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.604740 4695 generic.go:334] "Generic (PLEG): container finished" podID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerID="5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530" exitCode=0 Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.605039 4695 generic.go:334] "Generic (PLEG): container finished" podID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerID="97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99" exitCode=143 Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.604833 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.604818 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6480f507-2af8-48e4-bf5c-89e2783ec61e","Type":"ContainerDied","Data":"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530"} Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.605463 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6480f507-2af8-48e4-bf5c-89e2783ec61e","Type":"ContainerDied","Data":"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99"} Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.605504 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6480f507-2af8-48e4-bf5c-89e2783ec61e","Type":"ContainerDied","Data":"6e8a02585c0ec98fbf8f4eb42abf3a560ea2437f3206ca53c9a8495883a43e31"} Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.605533 4695 scope.go:117] "RemoveContainer" containerID="5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.612098 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.612155 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76608c1-3f59-41e2-bca1-9f1821b94c8d","Type":"ContainerDied","Data":"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8"} Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.611987 4695 generic.go:334] "Generic (PLEG): container finished" podID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerID="32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8" exitCode=143 Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.617329 4695 generic.go:334] "Generic (PLEG): container finished" podID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerID="d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26" exitCode=143 Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.617386 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76608c1-3f59-41e2-bca1-9f1821b94c8d","Type":"ContainerDied","Data":"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26"} Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.617430 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f76608c1-3f59-41e2-bca1-9f1821b94c8d","Type":"ContainerDied","Data":"9ffebbe0d2bb08bf421948915a7f033249dc1287566be54454b240401eec4adc"} Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.650266 4695 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.720499 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.744267 
4695 scope.go:117] "RemoveContainer" containerID="97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.751806 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.767634 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.775785 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.785546 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.786052 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-log" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786069 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-log" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.786083 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-httpd" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786093 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-httpd" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.786115 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54dd4fa9-5754-4da9-852d-c6b0ccbcc258" containerName="collect-profiles" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786123 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="54dd4fa9-5754-4da9-852d-c6b0ccbcc258" containerName="collect-profiles" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 
13:45:08.786137 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-httpd" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786145 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-httpd" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.786162 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-log" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786169 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-log" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786424 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="54dd4fa9-5754-4da9-852d-c6b0ccbcc258" containerName="collect-profiles" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786443 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-log" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786466 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" containerName="glance-httpd" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786486 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-log" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.786501 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" containerName="glance-httpd" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.788411 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.791156 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.791482 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.792450 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-gwvqk" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.792678 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.801932 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.803621 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.807594 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.807843 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.818957 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.836661 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.840824 4695 scope.go:117] "RemoveContainer" containerID="5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.841582 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530\": container with ID starting with 5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530 not found: ID does not exist" containerID="5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.841622 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530"} err="failed to get container status \"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530\": rpc error: code = NotFound desc = could not find container \"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530\": container with ID starting with 5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530 not found: ID does not exist" Nov 
26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.841655 4695 scope.go:117] "RemoveContainer" containerID="97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.842401 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99\": container with ID starting with 97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99 not found: ID does not exist" containerID="97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.842491 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99"} err="failed to get container status \"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99\": rpc error: code = NotFound desc = could not find container \"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99\": container with ID starting with 97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99 not found: ID does not exist" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.842518 4695 scope.go:117] "RemoveContainer" containerID="5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.845726 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530"} err="failed to get container status \"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530\": rpc error: code = NotFound desc = could not find container \"5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530\": container with ID starting with 5f39e9afed0a2d9322675531c4fcb0277dc6f093af57b39927e9539096013530 not found: ID does not 
exist" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.845768 4695 scope.go:117] "RemoveContainer" containerID="97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.848242 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99"} err="failed to get container status \"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99\": rpc error: code = NotFound desc = could not find container \"97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99\": container with ID starting with 97220e3fdff08e8bfd740eac4b61621b09702bc854c1503773433b6d6ea29d99 not found: ID does not exist" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.848450 4695 scope.go:117] "RemoveContainer" containerID="32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.853250 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.853524 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.853979 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.854096 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.854202 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.854411 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-logs\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.854550 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.854674 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.854838 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.854964 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvkt\" (UniqueName: \"kubernetes.io/projected/560d175c-207c-4842-bbc0-64852bc173d6-kube-api-access-xkvkt\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.855586 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.855779 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.855963 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.856078 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-logs\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.859137 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5hcq\" (UniqueName: \"kubernetes.io/projected/7bc51300-b52c-4dc8-b337-1b7a15539971-kube-api-access-v5hcq\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.859676 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.889687 4695 scope.go:117] "RemoveContainer" containerID="d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.911803 4695 scope.go:117] "RemoveContainer" containerID="32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.912262 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8\": container with ID starting with 32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8 not found: ID does not exist" containerID="32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.912293 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8"} err="failed to get container status \"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8\": rpc error: code = NotFound desc = could not find container \"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8\": container with ID starting with 32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8 not found: ID does not exist" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.912313 4695 scope.go:117] "RemoveContainer" containerID="d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26" Nov 26 13:45:08 crc kubenswrapper[4695]: E1126 13:45:08.912926 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26\": container with ID starting with d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26 not found: ID does not exist" containerID="d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.912948 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26"} err="failed to get container status \"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26\": rpc error: code = NotFound desc = could not find container \"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26\": 
container with ID starting with d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26 not found: ID does not exist" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.912961 4695 scope.go:117] "RemoveContainer" containerID="32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.913308 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8"} err="failed to get container status \"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8\": rpc error: code = NotFound desc = could not find container \"32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8\": container with ID starting with 32dc347453006d8a06391b5ead633b9adcb5b3343aac8224d1f790abbbf1dab8 not found: ID does not exist" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.913324 4695 scope.go:117] "RemoveContainer" containerID="d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.913610 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26"} err="failed to get container status \"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26\": rpc error: code = NotFound desc = could not find container \"d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26\": container with ID starting with d28fec4e2f521e526d00c439b5190853907ae79e4966793abe47029950a56c26 not found: ID does not exist" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960628 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-logs\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " 
pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960674 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960701 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960732 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960756 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkvkt\" (UniqueName: \"kubernetes.io/projected/560d175c-207c-4842-bbc0-64852bc173d6-kube-api-access-xkvkt\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960774 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc 
kubenswrapper[4695]: I1126 13:45:08.960793 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960823 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960842 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-logs\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960865 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5hcq\" (UniqueName: \"kubernetes.io/projected/7bc51300-b52c-4dc8-b337-1b7a15539971-kube-api-access-v5hcq\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960881 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960906 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960927 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960948 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960962 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.960980 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.961171 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-logs\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.961473 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.961500 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-logs\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.961680 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.961687 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.961714 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.987543 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.992000 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-scripts\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.993991 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-scripts\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.995308 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:08 crc kubenswrapper[4695]: I1126 13:45:08.996069 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " 
pod="openstack/glance-default-external-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.003536 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5hcq\" (UniqueName: \"kubernetes.io/projected/7bc51300-b52c-4dc8-b337-1b7a15539971-kube-api-access-v5hcq\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.017042 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkvkt\" (UniqueName: \"kubernetes.io/projected/560d175c-207c-4842-bbc0-64852bc173d6-kube-api-access-xkvkt\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.017633 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.018455 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-config-data\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.018514 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-config-data\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:09 crc 
kubenswrapper[4695]: I1126 13:45:09.073372 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " pod="openstack/glance-default-external-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.121773 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.144972 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.162859 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.183531 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6480f507-2af8-48e4-bf5c-89e2783ec61e" path="/var/lib/kubelet/pods/6480f507-2af8-48e4-bf5c-89e2783ec61e/volumes" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.185114 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76608c1-3f59-41e2-bca1-9f1821b94c8d" path="/var/lib/kubelet/pods/f76608c1-3f59-41e2-bca1-9f1821b94c8d/volumes" Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.635980 4695 generic.go:334] "Generic (PLEG): container finished" podID="4e6d0467-d196-483d-a0af-c616fcffd987" containerID="1e5cb882573728f88557512418f95b54b847dccc4aa2f5b2ac0d528ee616bf5c" exitCode=0 Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.636069 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lf5p" 
event={"ID":"4e6d0467-d196-483d-a0af-c616fcffd987","Type":"ContainerDied","Data":"1e5cb882573728f88557512418f95b54b847dccc4aa2f5b2ac0d528ee616bf5c"} Nov 26 13:45:09 crc kubenswrapper[4695]: I1126 13:45:09.791940 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:45:10 crc kubenswrapper[4695]: I1126 13:45:10.651501 4695 generic.go:334] "Generic (PLEG): container finished" podID="74573dd4-c899-4229-b940-e2f82063aa84" containerID="d7964da8f13d02f415806519a7fa08d01150a9fc7b8a57d489bc022ef55b8fb6" exitCode=0 Nov 26 13:45:10 crc kubenswrapper[4695]: I1126 13:45:10.651972 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gs5xh" event={"ID":"74573dd4-c899-4229-b940-e2f82063aa84","Type":"ContainerDied","Data":"d7964da8f13d02f415806519a7fa08d01150a9fc7b8a57d489bc022ef55b8fb6"} Nov 26 13:45:10 crc kubenswrapper[4695]: I1126 13:45:10.761452 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:45:11 crc kubenswrapper[4695]: W1126 13:45:11.365853 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod560d175c_207c_4842_bbc0_64852bc173d6.slice/crio-095062ba25eae08a0bf5682e602ff2cab33e1302123e723d592ae3758dec3f95 WatchSource:0}: Error finding container 095062ba25eae08a0bf5682e602ff2cab33e1302123e723d592ae3758dec3f95: Status 404 returned error can't find the container with id 095062ba25eae08a0bf5682e602ff2cab33e1302123e723d592ae3758dec3f95 Nov 26 13:45:11 crc kubenswrapper[4695]: I1126 13:45:11.662452 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"560d175c-207c-4842-bbc0-64852bc173d6","Type":"ContainerStarted","Data":"095062ba25eae08a0bf5682e602ff2cab33e1302123e723d592ae3758dec3f95"} Nov 26 13:45:11 crc kubenswrapper[4695]: I1126 13:45:11.778904 4695 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:45:11 crc kubenswrapper[4695]: I1126 13:45:11.778956 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:45:11 crc kubenswrapper[4695]: I1126 13:45:11.846707 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:45:11 crc kubenswrapper[4695]: I1126 13:45:11.848207 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.076500 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.139642 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-cbgr4"] Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.139902 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerName="dnsmasq-dns" containerID="cri-o://dfab0d4255935b5fe7d3beaac29565c701f8156baca086cb489167173f5f52df" gracePeriod=10 Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.667854 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.131:5353: connect: connection refused" Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.683277 4695 generic.go:334] "Generic (PLEG): container finished" podID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerID="dfab0d4255935b5fe7d3beaac29565c701f8156baca086cb489167173f5f52df" exitCode=0 Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.683560 4695 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" event={"ID":"44541725-82b5-41bc-b51b-a3e624eb84e6","Type":"ContainerDied","Data":"dfab0d4255935b5fe7d3beaac29565c701f8156baca086cb489167173f5f52df"} Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.685680 4695 generic.go:334] "Generic (PLEG): container finished" podID="322aceb8-cfb2-478e-a586-68c3f43b3977" containerID="73bcfc8a9ba792371d225d90aaab498137bf2fa8860559fdb11e2e86166be2ab" exitCode=0 Nov 26 13:45:12 crc kubenswrapper[4695]: I1126 13:45:12.685757 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s5q69" event={"ID":"322aceb8-cfb2-478e-a586-68c3f43b3977","Type":"ContainerDied","Data":"73bcfc8a9ba792371d225d90aaab498137bf2fa8860559fdb11e2e86166be2ab"} Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.350983 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.471633 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-s5q69" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.502928 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-config-data\") pod \"4e6d0467-d196-483d-a0af-c616fcffd987\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.502991 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-credential-keys\") pod \"4e6d0467-d196-483d-a0af-c616fcffd987\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.503037 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzvqm\" (UniqueName: \"kubernetes.io/projected/322aceb8-cfb2-478e-a586-68c3f43b3977-kube-api-access-mzvqm\") pod \"322aceb8-cfb2-478e-a586-68c3f43b3977\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.503097 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-fernet-keys\") pod \"4e6d0467-d196-483d-a0af-c616fcffd987\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.503128 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2762m\" (UniqueName: \"kubernetes.io/projected/4e6d0467-d196-483d-a0af-c616fcffd987-kube-api-access-2762m\") pod \"4e6d0467-d196-483d-a0af-c616fcffd987\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.503206 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-db-sync-config-data\") pod \"322aceb8-cfb2-478e-a586-68c3f43b3977\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.503278 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-combined-ca-bundle\") pod \"322aceb8-cfb2-478e-a586-68c3f43b3977\" (UID: \"322aceb8-cfb2-478e-a586-68c3f43b3977\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.503316 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-combined-ca-bundle\") pod \"4e6d0467-d196-483d-a0af-c616fcffd987\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.503341 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-scripts\") pod \"4e6d0467-d196-483d-a0af-c616fcffd987\" (UID: \"4e6d0467-d196-483d-a0af-c616fcffd987\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.515637 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-scripts" (OuterVolumeSpecName: "scripts") pod "4e6d0467-d196-483d-a0af-c616fcffd987" (UID: "4e6d0467-d196-483d-a0af-c616fcffd987"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.538466 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4e6d0467-d196-483d-a0af-c616fcffd987" (UID: "4e6d0467-d196-483d-a0af-c616fcffd987"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.538642 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "322aceb8-cfb2-478e-a586-68c3f43b3977" (UID: "322aceb8-cfb2-478e-a586-68c3f43b3977"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.552980 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4e6d0467-d196-483d-a0af-c616fcffd987" (UID: "4e6d0467-d196-483d-a0af-c616fcffd987"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.553162 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322aceb8-cfb2-478e-a586-68c3f43b3977-kube-api-access-mzvqm" (OuterVolumeSpecName: "kube-api-access-mzvqm") pod "322aceb8-cfb2-478e-a586-68c3f43b3977" (UID: "322aceb8-cfb2-478e-a586-68c3f43b3977"). InnerVolumeSpecName "kube-api-access-mzvqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.553238 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e6d0467-d196-483d-a0af-c616fcffd987-kube-api-access-2762m" (OuterVolumeSpecName: "kube-api-access-2762m") pod "4e6d0467-d196-483d-a0af-c616fcffd987" (UID: "4e6d0467-d196-483d-a0af-c616fcffd987"). InnerVolumeSpecName "kube-api-access-2762m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.566459 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "322aceb8-cfb2-478e-a586-68c3f43b3977" (UID: "322aceb8-cfb2-478e-a586-68c3f43b3977"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.574954 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-gs5xh" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.647223 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74573dd4-c899-4229-b940-e2f82063aa84-logs\") pod \"74573dd4-c899-4229-b940-e2f82063aa84\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.647307 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-scripts\") pod \"74573dd4-c899-4229-b940-e2f82063aa84\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.647684 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74573dd4-c899-4229-b940-e2f82063aa84-logs" (OuterVolumeSpecName: "logs") pod "74573dd4-c899-4229-b940-e2f82063aa84" (UID: "74573dd4-c899-4229-b940-e2f82063aa84"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.650785 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-257z7\" (UniqueName: \"kubernetes.io/projected/74573dd4-c899-4229-b940-e2f82063aa84-kube-api-access-257z7\") pod \"74573dd4-c899-4229-b940-e2f82063aa84\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.650847 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-config-data\") pod \"74573dd4-c899-4229-b940-e2f82063aa84\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.650921 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-combined-ca-bundle\") pod \"74573dd4-c899-4229-b940-e2f82063aa84\" (UID: \"74573dd4-c899-4229-b940-e2f82063aa84\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651434 4695 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651450 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzvqm\" (UniqueName: \"kubernetes.io/projected/322aceb8-cfb2-478e-a586-68c3f43b3977-kube-api-access-mzvqm\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651485 4695 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651494 
4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74573dd4-c899-4229-b940-e2f82063aa84-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651504 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2762m\" (UniqueName: \"kubernetes.io/projected/4e6d0467-d196-483d-a0af-c616fcffd987-kube-api-access-2762m\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651512 4695 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651520 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/322aceb8-cfb2-478e-a586-68c3f43b3977-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.651529 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.653465 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-config-data" (OuterVolumeSpecName: "config-data") pod "4e6d0467-d196-483d-a0af-c616fcffd987" (UID: "4e6d0467-d196-483d-a0af-c616fcffd987"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.654035 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e6d0467-d196-483d-a0af-c616fcffd987" (UID: "4e6d0467-d196-483d-a0af-c616fcffd987"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.662992 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74573dd4-c899-4229-b940-e2f82063aa84-kube-api-access-257z7" (OuterVolumeSpecName: "kube-api-access-257z7") pod "74573dd4-c899-4229-b940-e2f82063aa84" (UID: "74573dd4-c899-4229-b940-e2f82063aa84"). InnerVolumeSpecName "kube-api-access-257z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.680035 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-scripts" (OuterVolumeSpecName: "scripts") pod "74573dd4-c899-4229-b940-e2f82063aa84" (UID: "74573dd4-c899-4229-b940-e2f82063aa84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.685796 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74573dd4-c899-4229-b940-e2f82063aa84" (UID: "74573dd4-c899-4229-b940-e2f82063aa84"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.721897 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-config-data" (OuterVolumeSpecName: "config-data") pod "74573dd4-c899-4229-b940-e2f82063aa84" (UID: "74573dd4-c899-4229-b940-e2f82063aa84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.722001 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.742983 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5lf5p" event={"ID":"4e6d0467-d196-483d-a0af-c616fcffd987","Type":"ContainerDied","Data":"f2e48c08121d0de5226cc3abb106028e00f9f767a400c6c6d7a562e1fb509cce"} Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.743034 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2e48c08121d0de5226cc3abb106028e00f9f767a400c6c6d7a562e1fb509cce" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.743075 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5lf5p" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.749259 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-gs5xh" event={"ID":"74573dd4-c899-4229-b940-e2f82063aa84","Type":"ContainerDied","Data":"230c1528b71f7d2bd0e16ba2bc0802afb040bf851a63466b97bc8726c749f966"} Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.749304 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="230c1528b71f7d2bd0e16ba2bc0802afb040bf851a63466b97bc8726c749f966" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.749422 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-gs5xh" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752159 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-config\") pod \"44541725-82b5-41bc-b51b-a3e624eb84e6\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752203 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-sb\") pod \"44541725-82b5-41bc-b51b-a3e624eb84e6\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752239 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzclk\" (UniqueName: \"kubernetes.io/projected/44541725-82b5-41bc-b51b-a3e624eb84e6-kube-api-access-xzclk\") pod \"44541725-82b5-41bc-b51b-a3e624eb84e6\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752381 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-svc\") pod \"44541725-82b5-41bc-b51b-a3e624eb84e6\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752405 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-nb\") pod \"44541725-82b5-41bc-b51b-a3e624eb84e6\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752442 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-swift-storage-0\") pod \"44541725-82b5-41bc-b51b-a3e624eb84e6\" (UID: \"44541725-82b5-41bc-b51b-a3e624eb84e6\") " Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752808 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752820 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752829 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752839 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-257z7\" (UniqueName: \"kubernetes.io/projected/74573dd4-c899-4229-b940-e2f82063aa84-kube-api-access-257z7\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc 
kubenswrapper[4695]: I1126 13:45:14.752850 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74573dd4-c899-4229-b940-e2f82063aa84-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.752858 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e6d0467-d196-483d-a0af-c616fcffd987-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.760217 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44541725-82b5-41bc-b51b-a3e624eb84e6-kube-api-access-xzclk" (OuterVolumeSpecName: "kube-api-access-xzclk") pod "44541725-82b5-41bc-b51b-a3e624eb84e6" (UID: "44541725-82b5-41bc-b51b-a3e624eb84e6"). InnerVolumeSpecName "kube-api-access-xzclk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.826835 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-s5q69" event={"ID":"322aceb8-cfb2-478e-a586-68c3f43b3977","Type":"ContainerDied","Data":"7e1e3528076b283371bec4557b74cced485da51bad6d97876e0a902e5b6e4210"} Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.826905 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1e3528076b283371bec4557b74cced485da51bad6d97876e0a902e5b6e4210" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.826993 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-s5q69" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.850617 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7bc51300-b52c-4dc8-b337-1b7a15539971","Type":"ContainerStarted","Data":"b0240d38006870c9ac6cb645a05a689c6f706ae6c931a2aa2cf994dd321f8de8"} Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.857734 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzclk\" (UniqueName: \"kubernetes.io/projected/44541725-82b5-41bc-b51b-a3e624eb84e6-kube-api-access-xzclk\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.896314 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "44541725-82b5-41bc-b51b-a3e624eb84e6" (UID: "44541725-82b5-41bc-b51b-a3e624eb84e6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.929689 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44541725-82b5-41bc-b51b-a3e624eb84e6" (UID: "44541725-82b5-41bc-b51b-a3e624eb84e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.958857 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-config" (OuterVolumeSpecName: "config") pod "44541725-82b5-41bc-b51b-a3e624eb84e6" (UID: "44541725-82b5-41bc-b51b-a3e624eb84e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.960353 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.978513 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.978765 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:14 crc kubenswrapper[4695]: I1126 13:45:14.987102 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44541725-82b5-41bc-b51b-a3e624eb84e6" (UID: "44541725-82b5-41bc-b51b-a3e624eb84e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.036101 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44541725-82b5-41bc-b51b-a3e624eb84e6" (UID: "44541725-82b5-41bc-b51b-a3e624eb84e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.079920 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.079951 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44541725-82b5-41bc-b51b-a3e624eb84e6-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.167663 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56f5fb9ccb-sctd5"] Nov 26 13:45:15 crc kubenswrapper[4695]: E1126 13:45:15.168167 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322aceb8-cfb2-478e-a586-68c3f43b3977" containerName="barbican-db-sync" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168192 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="322aceb8-cfb2-478e-a586-68c3f43b3977" containerName="barbican-db-sync" Nov 26 13:45:15 crc kubenswrapper[4695]: E1126 13:45:15.168225 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerName="init" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168233 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerName="init" Nov 26 13:45:15 crc kubenswrapper[4695]: E1126 13:45:15.168258 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e6d0467-d196-483d-a0af-c616fcffd987" containerName="keystone-bootstrap" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168267 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e6d0467-d196-483d-a0af-c616fcffd987" containerName="keystone-bootstrap" Nov 26 13:45:15 crc kubenswrapper[4695]: E1126 13:45:15.168278 4695 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerName="dnsmasq-dns" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168286 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerName="dnsmasq-dns" Nov 26 13:45:15 crc kubenswrapper[4695]: E1126 13:45:15.168295 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74573dd4-c899-4229-b940-e2f82063aa84" containerName="placement-db-sync" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168302 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="74573dd4-c899-4229-b940-e2f82063aa84" containerName="placement-db-sync" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168522 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="74573dd4-c899-4229-b940-e2f82063aa84" containerName="placement-db-sync" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168550 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="322aceb8-cfb2-478e-a586-68c3f43b3977" containerName="barbican-db-sync" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168574 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e6d0467-d196-483d-a0af-c616fcffd987" containerName="keystone-bootstrap" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.168596 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" containerName="dnsmasq-dns" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.178595 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.184533 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.184763 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.184881 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-b6jfc" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.223729 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7dc8f95cf-vpnct"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.234961 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.247427 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56f5fb9ccb-sctd5"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.247577 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.263429 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dc8f95cf-vpnct"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.273632 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-4xcpb"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.279737 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.289968 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4qvq\" (UniqueName: \"kubernetes.io/projected/26c9568b-95d0-4b6d-8bed-6da941279a98-kube-api-access-x4qvq\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290038 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-config-data\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290088 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-config-data\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290124 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-config-data-custom\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290167 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp4ft\" (UniqueName: 
\"kubernetes.io/projected/71ec1963-a024-4fc4-a747-3c2ee03603a4-kube-api-access-fp4ft\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290194 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-combined-ca-bundle\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290217 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26c9568b-95d0-4b6d-8bed-6da941279a98-logs\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290326 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-config-data-custom\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.290354 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-combined-ca-bundle\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 
13:45:15.291788 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ec1963-a024-4fc4-a747-3c2ee03603a4-logs\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.306546 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-4xcpb"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393266 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-svc\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393347 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-config-data-custom\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393391 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-combined-ca-bundle\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393425 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gbfq\" (UniqueName: 
\"kubernetes.io/projected/529f964c-d6e6-45c0-8024-14586a1984d3-kube-api-access-2gbfq\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393471 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ec1963-a024-4fc4-a747-3c2ee03603a4-logs\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393491 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4qvq\" (UniqueName: \"kubernetes.io/projected/26c9568b-95d0-4b6d-8bed-6da941279a98-kube-api-access-x4qvq\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393512 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-config-data\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393535 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-config\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393561 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-config-data\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393585 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-config-data-custom\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393604 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393624 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393648 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp4ft\" (UniqueName: \"kubernetes.io/projected/71ec1963-a024-4fc4-a747-3c2ee03603a4-kube-api-access-fp4ft\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393670 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-combined-ca-bundle\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393695 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26c9568b-95d0-4b6d-8bed-6da941279a98-logs\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.393719 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.402850 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/71ec1963-a024-4fc4-a747-3c2ee03603a4-logs\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.404537 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26c9568b-95d0-4b6d-8bed-6da941279a98-logs\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.405795 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-config-data-custom\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.410619 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-config-data-custom\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.411678 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5f6db6dcf4-jklzx"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.413731 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.420174 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-config-data\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.426804 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-config-data\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.426848 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/26c9568b-95d0-4b6d-8bed-6da941279a98-combined-ca-bundle\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.427314 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.436240 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp4ft\" (UniqueName: \"kubernetes.io/projected/71ec1963-a024-4fc4-a747-3c2ee03603a4-kube-api-access-fp4ft\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.438922 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71ec1963-a024-4fc4-a747-3c2ee03603a4-combined-ca-bundle\") pod \"barbican-worker-7dc8f95cf-vpnct\" (UID: \"71ec1963-a024-4fc4-a747-3c2ee03603a4\") " pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.454111 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4qvq\" (UniqueName: \"kubernetes.io/projected/26c9568b-95d0-4b6d-8bed-6da941279a98-kube-api-access-x4qvq\") pod \"barbican-keystone-listener-56f5fb9ccb-sctd5\" (UID: \"26c9568b-95d0-4b6d-8bed-6da941279a98\") " pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.454180 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f6db6dcf4-jklzx"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.494784 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.494908 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.494936 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.494981 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpl8r\" (UniqueName: \"kubernetes.io/projected/0ec068e9-365b-4a45-a4c3-44c29ad4f494-kube-api-access-tpl8r\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.495019 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.495057 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-svc\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.495093 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-combined-ca-bundle\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.495136 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gbfq\" (UniqueName: \"kubernetes.io/projected/529f964c-d6e6-45c0-8024-14586a1984d3-kube-api-access-2gbfq\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.495180 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec068e9-365b-4a45-a4c3-44c29ad4f494-logs\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.495254 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-config\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.495286 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data-custom\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.496525 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-svc\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.497065 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.499689 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-config\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.508189 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.497575 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-sb\") pod 
\"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.515951 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gbfq\" (UniqueName: \"kubernetes.io/projected/529f964c-d6e6-45c0-8024-14586a1984d3-kube-api-access-2gbfq\") pod \"dnsmasq-dns-688c87cc99-4xcpb\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.525012 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.580020 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-776844bc66-7hpvs"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.581250 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.588807 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.589105 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-55hqz" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.589259 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.589388 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.589500 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.589610 4695 reflector.go:368] Caches populated 
for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.599024 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpl8r\" (UniqueName: \"kubernetes.io/projected/0ec068e9-365b-4a45-a4c3-44c29ad4f494-kube-api-access-tpl8r\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.599131 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-combined-ca-bundle\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.599209 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec068e9-365b-4a45-a4c3-44c29ad4f494-logs\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.599259 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data-custom\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.599290 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " 
pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.600238 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec068e9-365b-4a45-a4c3-44c29ad4f494-logs\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.604128 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data-custom\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.606933 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.611845 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-776844bc66-7hpvs"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.624022 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-combined-ca-bundle\") pod \"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.630126 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpl8r\" (UniqueName: \"kubernetes.io/projected/0ec068e9-365b-4a45-a4c3-44c29ad4f494-kube-api-access-tpl8r\") pod 
\"barbican-api-5f6db6dcf4-jklzx\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.669149 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.692778 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7dc8f95cf-vpnct" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.702406 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-combined-ca-bundle\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.702456 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-config-data\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.702541 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-internal-tls-certs\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.703015 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.703167 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-credential-keys\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.703576 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-public-tls-certs\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.703712 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-fernet-keys\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.703759 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rfvm\" (UniqueName: \"kubernetes.io/projected/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-kube-api-access-6rfvm\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.703816 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-scripts\") pod \"keystone-776844bc66-7hpvs\" (UID: 
\"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.806435 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-public-tls-certs\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.806865 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-fernet-keys\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.806891 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rfvm\" (UniqueName: \"kubernetes.io/projected/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-kube-api-access-6rfvm\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.806932 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-scripts\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.806983 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-combined-ca-bundle\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 
26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.807009 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-config-data\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.807037 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-internal-tls-certs\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.807085 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-credential-keys\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.819080 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-config-data\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.823698 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-scripts\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.826764 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-public-tls-certs\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.829012 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-internal-tls-certs\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.830033 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-fernet-keys\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.832336 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-combined-ca-bundle\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.856221 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rfvm\" (UniqueName: \"kubernetes.io/projected/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-kube-api-access-6rfvm\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.872013 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64-credential-keys\") pod \"keystone-776844bc66-7hpvs\" (UID: \"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64\") " pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.885063 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-666cd5b87b-cmnl9"] Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.897898 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.904649 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.904886 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-r6llz" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.905029 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.905231 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.905692 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.953957 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerStarted","Data":"fac0d571d617f64fb6d366a0824fa478f6fa1ee32f3aa43679f8fdfe841a383a"} Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.956171 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"560d175c-207c-4842-bbc0-64852bc173d6","Type":"ContainerStarted","Data":"d7814041a442373424a893c2909448005d29f818124b28d7bdfdd3db02375560"} Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.959232 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.960881 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-cbgr4" event={"ID":"44541725-82b5-41bc-b51b-a3e624eb84e6","Type":"ContainerDied","Data":"1bec86f26310ed0f055aa92bbef2344f16f6c1aa8d93f8725975c9ba0ac24b47"} Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.960932 4695 scope.go:117] "RemoveContainer" containerID="dfab0d4255935b5fe7d3beaac29565c701f8156baca086cb489167173f5f52df" Nov 26 13:45:15 crc kubenswrapper[4695]: I1126 13:45:15.962389 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-666cd5b87b-cmnl9"] Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.024711 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.067506 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-cbgr4"] Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.087967 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-cbgr4"] Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.128172 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2959a379-6a03-4c8d-b022-47e69ac7636d-logs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.128283 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-config-data\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.128420 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-combined-ca-bundle\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.128693 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-internal-tls-certs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 
13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.134256 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxn7z\" (UniqueName: \"kubernetes.io/projected/2959a379-6a03-4c8d-b022-47e69ac7636d-kube-api-access-hxn7z\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.134519 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-scripts\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.134586 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-public-tls-certs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.162238 4695 scope.go:117] "RemoveContainer" containerID="82fc2f69b0f8016224509d5c9d82124946758e724e94d4ea827d344d29b80400" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.184928 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56f5fb9ccb-sctd5"] Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.253359 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-scripts\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.253688 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-public-tls-certs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.253737 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2959a379-6a03-4c8d-b022-47e69ac7636d-logs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.253765 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-config-data\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.253811 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-combined-ca-bundle\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.253833 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-internal-tls-certs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.253890 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-hxn7z\" (UniqueName: \"kubernetes.io/projected/2959a379-6a03-4c8d-b022-47e69ac7636d-kube-api-access-hxn7z\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.258118 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2959a379-6a03-4c8d-b022-47e69ac7636d-logs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.262854 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-public-tls-certs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.263150 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-internal-tls-certs\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.263588 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-scripts\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.264180 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-config-data\") pod 
\"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.271021 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2959a379-6a03-4c8d-b022-47e69ac7636d-combined-ca-bundle\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.275107 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxn7z\" (UniqueName: \"kubernetes.io/projected/2959a379-6a03-4c8d-b022-47e69ac7636d-kube-api-access-hxn7z\") pod \"placement-666cd5b87b-cmnl9\" (UID: \"2959a379-6a03-4c8d-b022-47e69ac7636d\") " pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.460445 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-4xcpb"] Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.488276 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5f6db6dcf4-jklzx"] Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.564596 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.782214 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dc8f95cf-vpnct"] Nov 26 13:45:16 crc kubenswrapper[4695]: I1126 13:45:16.933683 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-776844bc66-7hpvs"] Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.052717 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-776844bc66-7hpvs" event={"ID":"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64","Type":"ContainerStarted","Data":"9c41d7f77091dd6b88d439cb4d9fcb2d6025e1ed2faca7b952e2463f367ec8e4"} Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.074301 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" event={"ID":"26c9568b-95d0-4b6d-8bed-6da941279a98","Type":"ContainerStarted","Data":"87f3041e88925d8e192e7e72a658626515d21d10705ba8d5de7d1e09c4848431"} Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.078677 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" event={"ID":"529f964c-d6e6-45c0-8024-14586a1984d3","Type":"ContainerStarted","Data":"1f6c9a037e25347fbfc7b562b0edea66b7fd023e33b67dad11c60aef1297b598"} Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.098614 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7bc51300-b52c-4dc8-b337-1b7a15539971","Type":"ContainerStarted","Data":"13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b"} Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.138587 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dc8f95cf-vpnct" event={"ID":"71ec1963-a024-4fc4-a747-3c2ee03603a4","Type":"ContainerStarted","Data":"331bf536fb82e4229f6dab7e7bde0c44566bee1fd287fb0fe71c8f15b11a6bf7"} Nov 26 
13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.165608 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6db6dcf4-jklzx" event={"ID":"0ec068e9-365b-4a45-a4c3-44c29ad4f494","Type":"ContainerStarted","Data":"05c6e626fa5935d1b896b63bf5c31df1b74e1ddb71ed30886ac406140acc7ccb"} Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.213026 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44541725-82b5-41bc-b51b-a3e624eb84e6" path="/var/lib/kubelet/pods/44541725-82b5-41bc-b51b-a3e624eb84e6/volumes" Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.213716 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6db6dcf4-jklzx" event={"ID":"0ec068e9-365b-4a45-a4c3-44c29ad4f494","Type":"ContainerStarted","Data":"076ca830c9c2550469a60d3df6de405ae460202d625adbc6121b3ae9c4dca099"} Nov 26 13:45:17 crc kubenswrapper[4695]: I1126 13:45:17.213747 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-666cd5b87b-cmnl9"] Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.186290 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6db6dcf4-jklzx" event={"ID":"0ec068e9-365b-4a45-a4c3-44c29ad4f494","Type":"ContainerStarted","Data":"64a968868aad6889df3e739d2798d9580829607e89b2692c7f05a465591f5d83"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.186974 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.186999 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.195511 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-776844bc66-7hpvs" 
event={"ID":"4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64","Type":"ContainerStarted","Data":"9411eeb52b9e1258ba831849b20ec53f9f9bc8f90d6560ad55b5691b4625bc90"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.196288 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.205080 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zzx52" event={"ID":"b6597360-8ab5-4bba-9137-fb4f57019c78","Type":"ContainerStarted","Data":"662f56bbccfddd0f417d9f1c5903b27aee718f2a3f903fe268393d3edee6598e"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.217972 4695 generic.go:334] "Generic (PLEG): container finished" podID="529f964c-d6e6-45c0-8024-14586a1984d3" containerID="19559fcc840d1f923589d27c98cce2c34082bf6cbefc8949390a69bab512bab5" exitCode=0 Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.218055 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" event={"ID":"529f964c-d6e6-45c0-8024-14586a1984d3","Type":"ContainerDied","Data":"19559fcc840d1f923589d27c98cce2c34082bf6cbefc8949390a69bab512bab5"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.241304 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7bc51300-b52c-4dc8-b337-1b7a15539971","Type":"ContainerStarted","Data":"24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.261070 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5f6db6dcf4-jklzx" podStartSLOduration=3.261050645 podStartE2EDuration="3.261050645s" podCreationTimestamp="2025-11-26 13:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:18.214978912 +0000 UTC 
m=+1301.850803994" watchObservedRunningTime="2025-11-26 13:45:18.261050645 +0000 UTC m=+1301.896875727" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.292659 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-666cd5b87b-cmnl9" event={"ID":"2959a379-6a03-4c8d-b022-47e69ac7636d","Type":"ContainerStarted","Data":"1b1d92884e94f6c0a9d6393a41ff19bdfe3e437f0e698f7f120d6a18e9f27062"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.292708 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-666cd5b87b-cmnl9" event={"ID":"2959a379-6a03-4c8d-b022-47e69ac7636d","Type":"ContainerStarted","Data":"93d4498aaf98bec1dd36c1885b3bcb382dedd96bdd89fa422d08b6255eb772d3"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.292722 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-666cd5b87b-cmnl9" event={"ID":"2959a379-6a03-4c8d-b022-47e69ac7636d","Type":"ContainerStarted","Data":"ea7ef74a88942985d6cf7f110ad6b3997a59d0c4211b24ded180d1d1a5d16eb5"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.293543 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.293577 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.300380 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-zzx52" podStartSLOduration=4.500010079 podStartE2EDuration="47.300338902s" podCreationTimestamp="2025-11-26 13:44:31 +0000 UTC" firstStartedPulling="2025-11-26 13:44:33.276709665 +0000 UTC m=+1256.912534747" lastFinishedPulling="2025-11-26 13:45:16.077038488 +0000 UTC m=+1299.712863570" observedRunningTime="2025-11-26 13:45:18.280415444 +0000 UTC m=+1301.916240526" watchObservedRunningTime="2025-11-26 13:45:18.300338902 +0000 UTC 
m=+1301.936163984" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.300974 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-776844bc66-7hpvs" podStartSLOduration=3.300968982 podStartE2EDuration="3.300968982s" podCreationTimestamp="2025-11-26 13:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:18.24994572 +0000 UTC m=+1301.885770802" watchObservedRunningTime="2025-11-26 13:45:18.300968982 +0000 UTC m=+1301.936794064" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.307910 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"560d175c-207c-4842-bbc0-64852bc173d6","Type":"ContainerStarted","Data":"a6447666e61865216259c7f6f5d4e32a29d90e7d887dd0c31dbb838eea65a003"} Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.328047 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=10.328028517 podStartE2EDuration="10.328028517s" podCreationTimestamp="2025-11-26 13:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:18.322919954 +0000 UTC m=+1301.958745046" watchObservedRunningTime="2025-11-26 13:45:18.328028517 +0000 UTC m=+1301.963853599" Nov 26 13:45:18 crc kubenswrapper[4695]: I1126 13:45:18.394097 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-666cd5b87b-cmnl9" podStartSLOduration=3.394078318 podStartE2EDuration="3.394078318s" podCreationTimestamp="2025-11-26 13:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:18.384597106 +0000 UTC m=+1302.020422188" watchObservedRunningTime="2025-11-26 
13:45:18.394078318 +0000 UTC m=+1302.029903400" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.149331 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.149798 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.210630 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.210670 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.219390 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.222613 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.222832 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.248107 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.251198 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.251176320999999 podStartE2EDuration="11.251176321s" podCreationTimestamp="2025-11-26 13:45:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:18.414050447 +0000 UTC 
m=+1302.049875529" watchObservedRunningTime="2025-11-26 13:45:19.251176321 +0000 UTC m=+1302.887001413" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.330659 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" event={"ID":"529f964c-d6e6-45c0-8024-14586a1984d3","Type":"ContainerStarted","Data":"a4963866846b59f88109bca1c7068b75c99db2c3f6265ef905e6c3adf5532026"} Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.332492 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.332532 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.332546 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.332574 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.332666 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.360224 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-766c89cd-88742"] Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.361865 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.366765 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.367099 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.383695 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-public-tls-certs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.383787 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-config-data-custom\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.383810 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-internal-tls-certs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.383860 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-config-data\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " 
pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.383877 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c9406e4-9d5c-429a-94b4-6da4283c3462-logs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.383902 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fcgz4\" (UniqueName: \"kubernetes.io/projected/4c9406e4-9d5c-429a-94b4-6da4283c3462-kube-api-access-fcgz4\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.383945 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-combined-ca-bundle\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.402081 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-766c89cd-88742"] Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.405971 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" podStartSLOduration=4.40594817 podStartE2EDuration="4.40594817s" podCreationTimestamp="2025-11-26 13:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:19.360131735 +0000 UTC m=+1302.995956807" watchObservedRunningTime="2025-11-26 13:45:19.40594817 +0000 UTC m=+1303.041773262" Nov 26 
13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.485012 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-config-data-custom\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.485104 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-internal-tls-certs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.485256 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-config-data\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.485288 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c9406e4-9d5c-429a-94b4-6da4283c3462-logs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.485322 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fcgz4\" (UniqueName: \"kubernetes.io/projected/4c9406e4-9d5c-429a-94b4-6da4283c3462-kube-api-access-fcgz4\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.485526 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-combined-ca-bundle\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.485601 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-public-tls-certs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.486205 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c9406e4-9d5c-429a-94b4-6da4283c3462-logs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.494599 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-internal-tls-certs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.495564 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-config-data-custom\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.504698 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-combined-ca-bundle\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.507823 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-public-tls-certs\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.509779 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c9406e4-9d5c-429a-94b4-6da4283c3462-config-data\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.516267 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fcgz4\" (UniqueName: \"kubernetes.io/projected/4c9406e4-9d5c-429a-94b4-6da4283c3462-kube-api-access-fcgz4\") pod \"barbican-api-766c89cd-88742\" (UID: \"4c9406e4-9d5c-429a-94b4-6da4283c3462\") " pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:19 crc kubenswrapper[4695]: I1126 13:45:19.707421 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:20 crc kubenswrapper[4695]: I1126 13:45:20.748763 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-766c89cd-88742"] Nov 26 13:45:20 crc kubenswrapper[4695]: W1126 13:45:20.764073 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c9406e4_9d5c_429a_94b4_6da4283c3462.slice/crio-dcaefb16d39861dce792125fd0475a464c881841cc25603ccf2aafcf138419fa WatchSource:0}: Error finding container dcaefb16d39861dce792125fd0475a464c881841cc25603ccf2aafcf138419fa: Status 404 returned error can't find the container with id dcaefb16d39861dce792125fd0475a464c881841cc25603ccf2aafcf138419fa Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.382045 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" event={"ID":"26c9568b-95d0-4b6d-8bed-6da941279a98","Type":"ContainerStarted","Data":"92f0f498f3b939966a4d06447f491f04daebae0687bd7a5fc6b5e8223b24ecbc"} Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.382376 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" event={"ID":"26c9568b-95d0-4b6d-8bed-6da941279a98","Type":"ContainerStarted","Data":"69d55ac921267328f68976a34a94ba7cf3c9de504e6e8f6ff1a05a91f6320bf9"} Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.387040 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dc8f95cf-vpnct" event={"ID":"71ec1963-a024-4fc4-a747-3c2ee03603a4","Type":"ContainerStarted","Data":"bee2a3cb42f6902d9880c83d316aaf55921f4b71a23711dcffc403580296202d"} Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.387083 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dc8f95cf-vpnct" 
event={"ID":"71ec1963-a024-4fc4-a747-3c2ee03603a4","Type":"ContainerStarted","Data":"1b17a1150bb91ff0316a33b4206da7a9dbe8130f387d96bd829502b0d6aa7fc8"} Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.390864 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766c89cd-88742" event={"ID":"4c9406e4-9d5c-429a-94b4-6da4283c3462","Type":"ContainerStarted","Data":"f8d491ed9fb89a001975d7e167516ce2054b39145e5a41281c3617b02c399446"} Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.390951 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766c89cd-88742" event={"ID":"4c9406e4-9d5c-429a-94b4-6da4283c3462","Type":"ContainerStarted","Data":"b41d0c49dde33163038b49acae4363f0242c68b1ae84cd6d8651e2556f96f213"} Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.390964 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-766c89cd-88742" event={"ID":"4c9406e4-9d5c-429a-94b4-6da4283c3462","Type":"ContainerStarted","Data":"dcaefb16d39861dce792125fd0475a464c881841cc25603ccf2aafcf138419fa"} Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.391833 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.391966 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.426721 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56f5fb9ccb-sctd5" podStartSLOduration=2.467078442 podStartE2EDuration="6.426701418s" podCreationTimestamp="2025-11-26 13:45:15 +0000 UTC" firstStartedPulling="2025-11-26 13:45:16.251232368 +0000 UTC m=+1299.887057450" lastFinishedPulling="2025-11-26 13:45:20.210855344 +0000 UTC m=+1303.846680426" observedRunningTime="2025-11-26 13:45:21.407751412 +0000 UTC 
m=+1305.043576494" watchObservedRunningTime="2025-11-26 13:45:21.426701418 +0000 UTC m=+1305.062526500" Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.461014 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-766c89cd-88742" podStartSLOduration=2.460991864 podStartE2EDuration="2.460991864s" podCreationTimestamp="2025-11-26 13:45:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:21.458450723 +0000 UTC m=+1305.094275815" watchObservedRunningTime="2025-11-26 13:45:21.460991864 +0000 UTC m=+1305.096816946" Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.470262 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7dc8f95cf-vpnct" podStartSLOduration=3.070154813 podStartE2EDuration="6.47024373s" podCreationTimestamp="2025-11-26 13:45:15 +0000 UTC" firstStartedPulling="2025-11-26 13:45:16.810692415 +0000 UTC m=+1300.446517497" lastFinishedPulling="2025-11-26 13:45:20.210781332 +0000 UTC m=+1303.846606414" observedRunningTime="2025-11-26 13:45:21.433121563 +0000 UTC m=+1305.068946645" watchObservedRunningTime="2025-11-26 13:45:21.47024373 +0000 UTC m=+1305.106068812" Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.781826 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b4894b95b-8zpbh" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 26 13:45:21 crc kubenswrapper[4695]: I1126 13:45:21.848953 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d4c9c9dbd-9bbnw" podUID="3ca1545d-04c5-45f8-8738-f662db77ffba" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 26 13:45:22 crc kubenswrapper[4695]: I1126 13:45:22.957311 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:23 crc kubenswrapper[4695]: I1126 13:45:23.507132 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 13:45:24 crc kubenswrapper[4695]: I1126 13:45:24.423680 4695 generic.go:334] "Generic (PLEG): container finished" podID="b6597360-8ab5-4bba-9137-fb4f57019c78" containerID="662f56bbccfddd0f417d9f1c5903b27aee718f2a3f903fe268393d3edee6598e" exitCode=0 Nov 26 13:45:24 crc kubenswrapper[4695]: I1126 13:45:24.423771 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zzx52" event={"ID":"b6597360-8ab5-4bba-9137-fb4f57019c78","Type":"ContainerDied","Data":"662f56bbccfddd0f417d9f1c5903b27aee718f2a3f903fe268393d3edee6598e"} Nov 26 13:45:25 crc kubenswrapper[4695]: I1126 13:45:25.403379 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 13:45:25 crc kubenswrapper[4695]: I1126 13:45:25.671484 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:25 crc kubenswrapper[4695]: I1126 13:45:25.739948 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9k8dw"] Nov 26 13:45:25 crc kubenswrapper[4695]: I1126 13:45:25.740263 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerName="dnsmasq-dns" containerID="cri-o://8723bdf51d6b30759747757c2b834fe5e23000cf9f7558577cc6198bdd269ec3" gracePeriod=10 Nov 26 13:45:26 crc kubenswrapper[4695]: I1126 
13:45:26.464275 4695 generic.go:334] "Generic (PLEG): container finished" podID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerID="8723bdf51d6b30759747757c2b834fe5e23000cf9f7558577cc6198bdd269ec3" exitCode=0 Nov 26 13:45:26 crc kubenswrapper[4695]: I1126 13:45:26.464325 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" event={"ID":"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa","Type":"ContainerDied","Data":"8723bdf51d6b30759747757c2b834fe5e23000cf9f7558577cc6198bdd269ec3"} Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.075756 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.364195 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.832002 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-zzx52" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.844929 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.868295 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6597360-8ab5-4bba-9137-fb4f57019c78-etc-machine-id\") pod \"b6597360-8ab5-4bba-9137-fb4f57019c78\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.868385 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-scripts\") pod \"b6597360-8ab5-4bba-9137-fb4f57019c78\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.868501 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-db-sync-config-data\") pod \"b6597360-8ab5-4bba-9137-fb4f57019c78\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.868523 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-config-data\") pod \"b6597360-8ab5-4bba-9137-fb4f57019c78\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.868576 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-combined-ca-bundle\") pod \"b6597360-8ab5-4bba-9137-fb4f57019c78\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") 
" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.868637 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2hcjz\" (UniqueName: \"kubernetes.io/projected/b6597360-8ab5-4bba-9137-fb4f57019c78-kube-api-access-2hcjz\") pod \"b6597360-8ab5-4bba-9137-fb4f57019c78\" (UID: \"b6597360-8ab5-4bba-9137-fb4f57019c78\") " Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.886062 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6597360-8ab5-4bba-9137-fb4f57019c78-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b6597360-8ab5-4bba-9137-fb4f57019c78" (UID: "b6597360-8ab5-4bba-9137-fb4f57019c78"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.888606 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6597360-8ab5-4bba-9137-fb4f57019c78-kube-api-access-2hcjz" (OuterVolumeSpecName: "kube-api-access-2hcjz") pod "b6597360-8ab5-4bba-9137-fb4f57019c78" (UID: "b6597360-8ab5-4bba-9137-fb4f57019c78"). InnerVolumeSpecName "kube-api-access-2hcjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.890160 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b6597360-8ab5-4bba-9137-fb4f57019c78" (UID: "b6597360-8ab5-4bba-9137-fb4f57019c78"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.915279 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-scripts" (OuterVolumeSpecName: "scripts") pod "b6597360-8ab5-4bba-9137-fb4f57019c78" (UID: "b6597360-8ab5-4bba-9137-fb4f57019c78"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.962602 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6597360-8ab5-4bba-9137-fb4f57019c78" (UID: "b6597360-8ab5-4bba-9137-fb4f57019c78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.969951 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-config-data" (OuterVolumeSpecName: "config-data") pod "b6597360-8ab5-4bba-9137-fb4f57019c78" (UID: "b6597360-8ab5-4bba-9137-fb4f57019c78"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.970414 4695 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.970449 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.970460 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.970469 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2hcjz\" (UniqueName: \"kubernetes.io/projected/b6597360-8ab5-4bba-9137-fb4f57019c78-kube-api-access-2hcjz\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.970480 4695 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b6597360-8ab5-4bba-9137-fb4f57019c78-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:27 crc kubenswrapper[4695]: I1126 13:45:27.970488 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b6597360-8ab5-4bba-9137-fb4f57019c78-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:28 crc kubenswrapper[4695]: I1126 13:45:28.483443 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-zzx52" event={"ID":"b6597360-8ab5-4bba-9137-fb4f57019c78","Type":"ContainerDied","Data":"152033dfc0fd407e7789e447ee263dbbacf760515bd6620df59d63078eeafdc7"} Nov 26 13:45:28 crc 
kubenswrapper[4695]: I1126 13:45:28.483478 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="152033dfc0fd407e7789e447ee263dbbacf760515bd6620df59d63078eeafdc7" Nov 26 13:45:28 crc kubenswrapper[4695]: I1126 13:45:28.483534 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-zzx52" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.140764 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:29 crc kubenswrapper[4695]: E1126 13:45:29.141668 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6597360-8ab5-4bba-9137-fb4f57019c78" containerName="cinder-db-sync" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.141687 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6597360-8ab5-4bba-9137-fb4f57019c78" containerName="cinder-db-sync" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.141920 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6597360-8ab5-4bba-9137-fb4f57019c78" containerName="cinder-db-sync" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.143557 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.146185 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.146625 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.146763 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.146915 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-t6qg9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.154982 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.156158 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.221437 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.221514 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466cbcd6-2cb2-4451-b921-c20d81843e39-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.221653 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.221678 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-scripts\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.221735 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.221783 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxxrk\" (UniqueName: \"kubernetes.io/projected/466cbcd6-2cb2-4451-b921-c20d81843e39-kube-api-access-lxxrk\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.292307 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-t9qc9"] Nov 26 13:45:29 crc kubenswrapper[4695]: E1126 13:45:29.292776 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerName="init" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.292791 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerName="init" Nov 26 13:45:29 crc kubenswrapper[4695]: E1126 13:45:29.292838 
4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerName="dnsmasq-dns" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.292846 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerName="dnsmasq-dns" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.293051 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" containerName="dnsmasq-dns" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.294074 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.323890 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-svc\") pod \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.323999 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8k8l\" (UniqueName: \"kubernetes.io/projected/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-kube-api-access-w8k8l\") pod \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324033 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-config\") pod \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324071 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-nb\") pod \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324120 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-sb\") pod \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324137 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-swift-storage-0\") pod \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\" (UID: \"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa\") " Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324433 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324464 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466cbcd6-2cb2-4451-b921-c20d81843e39-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324535 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-scripts\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 
13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324550 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324576 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.324593 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxxrk\" (UniqueName: \"kubernetes.io/projected/466cbcd6-2cb2-4451-b921-c20d81843e39-kube-api-access-lxxrk\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.337077 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466cbcd6-2cb2-4451-b921-c20d81843e39-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.342952 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.343708 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.345869 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.355173 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-scripts\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.355433 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-t9qc9"] Nov 26 13:45:29 crc kubenswrapper[4695]: E1126 13:45:29.361810 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.371700 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-kube-api-access-w8k8l" (OuterVolumeSpecName: "kube-api-access-w8k8l") pod "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" (UID: "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa"). InnerVolumeSpecName "kube-api-access-w8k8l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.390543 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxxrk\" (UniqueName: \"kubernetes.io/projected/466cbcd6-2cb2-4451-b921-c20d81843e39-kube-api-access-lxxrk\") pod \"cinder-scheduler-0\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.426763 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g29zv\" (UniqueName: \"kubernetes.io/projected/335e87e5-67d2-44c6-9f2d-ac0424b243c2-kube-api-access-g29zv\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.426852 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-config\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.426931 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.426952 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: 
\"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.426973 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.427010 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.427063 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8k8l\" (UniqueName: \"kubernetes.io/projected/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-kube-api-access-w8k8l\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.442409 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.443956 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.451754 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.460310 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.463133 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" (UID: "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.469785 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" (UID: "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.477175 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-config" (OuterVolumeSpecName: "config") pod "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" (UID: "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.484584 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" (UID: "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.496036 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" (UID: "012219ed-72d2-4f2e-a51e-1f2a77b0e8aa"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.503091 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.520653 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" event={"ID":"012219ed-72d2-4f2e-a51e-1f2a77b0e8aa","Type":"ContainerDied","Data":"4e02fe4b3dafbf31e4828b96526099709be611acf4896d80184d6225f976f433"} Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.520718 4695 scope.go:117] "RemoveContainer" containerID="8723bdf51d6b30759747757c2b834fe5e23000cf9f7558577cc6198bdd269ec3" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.520920 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-9k8dw" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528359 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528415 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528434 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw9t9\" (UniqueName: \"kubernetes.io/projected/d7f6344e-8260-49b1-af60-a37747486529-kube-api-access-bw9t9\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528456 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528471 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7f6344e-8260-49b1-af60-a37747486529-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " 
pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528493 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528515 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f6344e-8260-49b1-af60-a37747486529-logs\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528541 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528554 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data-custom\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528578 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-scripts\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528602 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528678 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g29zv\" (UniqueName: \"kubernetes.io/projected/335e87e5-67d2-44c6-9f2d-ac0424b243c2-kube-api-access-g29zv\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528731 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-config\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528787 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528798 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528807 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528817 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.528826 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.529603 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-config\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.529931 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.530132 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.530574 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.531139 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.551842 4695 scope.go:117] "RemoveContainer" containerID="5d8f328cef2470d66242640f44a0eec0be992c9749cde6cdbff1c4820e45d41e" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.552063 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerStarted","Data":"763c4ea11875ca37063fb3414fe71f359de9950b00d095684c5b11da62c8ef2a"} Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.553612 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="ceilometer-notification-agent" containerID="cri-o://d1d5dd367b065a5db2b4a6fd92d8ffbd7ec67e7670b62d10af5efbb6e945d944" gracePeriod=30 Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.553731 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.554286 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="proxy-httpd" containerID="cri-o://763c4ea11875ca37063fb3414fe71f359de9950b00d095684c5b11da62c8ef2a" gracePeriod=30 Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.554390 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="sg-core" containerID="cri-o://fac0d571d617f64fb6d366a0824fa478f6fa1ee32f3aa43679f8fdfe841a383a" gracePeriod=30 Nov 26 13:45:29 crc 
kubenswrapper[4695]: I1126 13:45:29.561484 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g29zv\" (UniqueName: \"kubernetes.io/projected/335e87e5-67d2-44c6-9f2d-ac0424b243c2-kube-api-access-g29zv\") pod \"dnsmasq-dns-6bb4fc677f-t9qc9\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.617472 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9k8dw"] Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.628001 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-9k8dw"] Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.630091 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.630143 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw9t9\" (UniqueName: \"kubernetes.io/projected/d7f6344e-8260-49b1-af60-a37747486529-kube-api-access-bw9t9\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.630165 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7f6344e-8260-49b1-af60-a37747486529-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.630190 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d7f6344e-8260-49b1-af60-a37747486529-logs\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.630217 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data-custom\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.630236 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-scripts\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.630252 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.633428 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.633634 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.633842 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f6344e-8260-49b1-af60-a37747486529-logs\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.633880 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7f6344e-8260-49b1-af60-a37747486529-etc-machine-id\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.636493 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-scripts\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.641317 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data-custom\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.648413 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.660756 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw9t9\" (UniqueName: \"kubernetes.io/projected/d7f6344e-8260-49b1-af60-a37747486529-kube-api-access-bw9t9\") pod \"cinder-api-0\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " pod="openstack/cinder-api-0" Nov 26 13:45:29 crc kubenswrapper[4695]: I1126 13:45:29.781834 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.226609 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:30 crc kubenswrapper[4695]: W1126 13:45:30.279109 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466cbcd6_2cb2_4451_b921_c20d81843e39.slice/crio-e77319275efd3fd0f10a3a97c644be64ed84c22a75bdb4171cad4ef85f5dc660 WatchSource:0}: Error finding container e77319275efd3fd0f10a3a97c644be64ed84c22a75bdb4171cad4ef85f5dc660: Status 404 returned error can't find the container with id e77319275efd3fd0f10a3a97c644be64ed84c22a75bdb4171cad4ef85f5dc660 Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.428318 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-t9qc9"] Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.589308 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.593098 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" event={"ID":"335e87e5-67d2-44c6-9f2d-ac0424b243c2","Type":"ContainerStarted","Data":"756cea872baccfd50870df17816061998ae12225874ba772a09f2903aa32535f"} Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.603638 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"466cbcd6-2cb2-4451-b921-c20d81843e39","Type":"ContainerStarted","Data":"e77319275efd3fd0f10a3a97c644be64ed84c22a75bdb4171cad4ef85f5dc660"} Nov 26 13:45:30 crc kubenswrapper[4695]: W1126 13:45:30.604903 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7f6344e_8260_49b1_af60_a37747486529.slice/crio-c7efac4415fbe46ea101c66174b2b42ad27a70435e2f0ad319ebc02a43caf652 
WatchSource:0}: Error finding container c7efac4415fbe46ea101c66174b2b42ad27a70435e2f0ad319ebc02a43caf652: Status 404 returned error can't find the container with id c7efac4415fbe46ea101c66174b2b42ad27a70435e2f0ad319ebc02a43caf652 Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.607015 4695 generic.go:334] "Generic (PLEG): container finished" podID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerID="763c4ea11875ca37063fb3414fe71f359de9950b00d095684c5b11da62c8ef2a" exitCode=0 Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.607039 4695 generic.go:334] "Generic (PLEG): container finished" podID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerID="fac0d571d617f64fb6d366a0824fa478f6fa1ee32f3aa43679f8fdfe841a383a" exitCode=2 Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.607047 4695 generic.go:334] "Generic (PLEG): container finished" podID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerID="d1d5dd367b065a5db2b4a6fd92d8ffbd7ec67e7670b62d10af5efbb6e945d944" exitCode=0 Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.607061 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerDied","Data":"763c4ea11875ca37063fb3414fe71f359de9950b00d095684c5b11da62c8ef2a"} Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.607079 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerDied","Data":"fac0d571d617f64fb6d366a0824fa478f6fa1ee32f3aa43679f8fdfe841a383a"} Nov 26 13:45:30 crc kubenswrapper[4695]: I1126 13:45:30.607090 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerDied","Data":"d1d5dd367b065a5db2b4a6fd92d8ffbd7ec67e7670b62d10af5efbb6e945d944"} Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.015960 4695 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.069921 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-sg-core-conf-yaml\") pod \"a9c800b1-62f2-42d6-a64c-95a673861ebb\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.070028 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-config-data\") pod \"a9c800b1-62f2-42d6-a64c-95a673861ebb\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.070082 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-combined-ca-bundle\") pod \"a9c800b1-62f2-42d6-a64c-95a673861ebb\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.070128 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-scripts\") pod \"a9c800b1-62f2-42d6-a64c-95a673861ebb\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.070168 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-run-httpd\") pod \"a9c800b1-62f2-42d6-a64c-95a673861ebb\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.070189 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-log-httpd\") pod \"a9c800b1-62f2-42d6-a64c-95a673861ebb\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.070230 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grlfh\" (UniqueName: \"kubernetes.io/projected/a9c800b1-62f2-42d6-a64c-95a673861ebb-kube-api-access-grlfh\") pod \"a9c800b1-62f2-42d6-a64c-95a673861ebb\" (UID: \"a9c800b1-62f2-42d6-a64c-95a673861ebb\") " Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.071205 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a9c800b1-62f2-42d6-a64c-95a673861ebb" (UID: "a9c800b1-62f2-42d6-a64c-95a673861ebb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.101197 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a9c800b1-62f2-42d6-a64c-95a673861ebb" (UID: "a9c800b1-62f2-42d6-a64c-95a673861ebb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.110526 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9c800b1-62f2-42d6-a64c-95a673861ebb-kube-api-access-grlfh" (OuterVolumeSpecName: "kube-api-access-grlfh") pod "a9c800b1-62f2-42d6-a64c-95a673861ebb" (UID: "a9c800b1-62f2-42d6-a64c-95a673861ebb"). InnerVolumeSpecName "kube-api-access-grlfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.111494 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-scripts" (OuterVolumeSpecName: "scripts") pod "a9c800b1-62f2-42d6-a64c-95a673861ebb" (UID: "a9c800b1-62f2-42d6-a64c-95a673861ebb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.127097 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a9c800b1-62f2-42d6-a64c-95a673861ebb" (UID: "a9c800b1-62f2-42d6-a64c-95a673861ebb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.155450 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9c800b1-62f2-42d6-a64c-95a673861ebb" (UID: "a9c800b1-62f2-42d6-a64c-95a673861ebb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.173380 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grlfh\" (UniqueName: \"kubernetes.io/projected/a9c800b1-62f2-42d6-a64c-95a673861ebb-kube-api-access-grlfh\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.173405 4695 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.173415 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.173423 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.173433 4695 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.173442 4695 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9c800b1-62f2-42d6-a64c-95a673861ebb-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.174845 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="012219ed-72d2-4f2e-a51e-1f2a77b0e8aa" path="/var/lib/kubelet/pods/012219ed-72d2-4f2e-a51e-1f2a77b0e8aa/volumes" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.308479 4695 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-config-data" (OuterVolumeSpecName: "config-data") pod "a9c800b1-62f2-42d6-a64c-95a673861ebb" (UID: "a9c800b1-62f2-42d6-a64c-95a673861ebb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.382052 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9c800b1-62f2-42d6-a64c-95a673861ebb-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.613876 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.685837 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7f6344e-8260-49b1-af60-a37747486529","Type":"ContainerStarted","Data":"562b4e0156c8a3cb6087ec24630c65fbfba793c2a1ebaa6cd5974eb5571b7d9d"} Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.685890 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7f6344e-8260-49b1-af60-a37747486529","Type":"ContainerStarted","Data":"c7efac4415fbe46ea101c66174b2b42ad27a70435e2f0ad319ebc02a43caf652"} Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.721451 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9c800b1-62f2-42d6-a64c-95a673861ebb","Type":"ContainerDied","Data":"7f0a155d6e85e737fbf2ea58705d50b91ff69d8dde56dd2018afb77868426445"} Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.721503 4695 scope.go:117] "RemoveContainer" containerID="763c4ea11875ca37063fb3414fe71f359de9950b00d095684c5b11da62c8ef2a" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.721647 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.727598 4695 generic.go:334] "Generic (PLEG): container finished" podID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerID="2406831eb658b9a8acbb2140c8669084e9482778c78d63022b1a10130d512fbf" exitCode=0 Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.727909 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" event={"ID":"335e87e5-67d2-44c6-9f2d-ac0424b243c2","Type":"ContainerDied","Data":"2406831eb658b9a8acbb2140c8669084e9482778c78d63022b1a10130d512fbf"} Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.780662 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.850974 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7d4c9c9dbd-9bbnw" podUID="3ca1545d-04c5-45f8-8738-f662db77ffba" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.144:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.144:8443: connect: connection refused" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.953638 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.976634 4695 scope.go:117] "RemoveContainer" containerID="fac0d571d617f64fb6d366a0824fa478f6fa1ee32f3aa43679f8fdfe841a383a" Nov 26 13:45:31 crc kubenswrapper[4695]: I1126 13:45:31.978412 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:31.998389 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:32 crc kubenswrapper[4695]: E1126 13:45:31.998802 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="ceilometer-notification-agent" 
Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:31.998816 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="ceilometer-notification-agent" Nov 26 13:45:32 crc kubenswrapper[4695]: E1126 13:45:31.998843 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="proxy-httpd" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:31.998849 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="proxy-httpd" Nov 26 13:45:32 crc kubenswrapper[4695]: E1126 13:45:31.998861 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="sg-core" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:31.998868 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="sg-core" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:31.999048 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="sg-core" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:31.999064 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="proxy-httpd" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:31.999074 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" containerName="ceilometer-notification-agent" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.000974 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.004218 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.008419 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.036813 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.098134 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.098190 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-scripts\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.098214 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-config-data\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.098241 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pc4x\" (UniqueName: \"kubernetes.io/projected/795192f7-3108-457f-b912-bd47f356881b-kube-api-access-9pc4x\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " 
pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.098360 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-log-httpd\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.098392 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-run-httpd\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.098422 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.103045 4695 scope.go:117] "RemoveContainer" containerID="d1d5dd367b065a5db2b4a6fd92d8ffbd7ec67e7670b62d10af5efbb6e945d944" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.200818 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-log-httpd\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.200891 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-run-httpd\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " 
pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.200921 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.201104 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.201152 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-scripts\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.201176 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-config-data\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.201219 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pc4x\" (UniqueName: \"kubernetes.io/projected/795192f7-3108-457f-b912-bd47f356881b-kube-api-access-9pc4x\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.202132 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-log-httpd\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.202469 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-run-httpd\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.209171 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.211176 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.216178 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-scripts\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.217671 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-config-data\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.221590 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9pc4x\" (UniqueName: \"kubernetes.io/projected/795192f7-3108-457f-b912-bd47f356881b-kube-api-access-9pc4x\") pod \"ceilometer-0\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.283827 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.335750 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.569703 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-766c89cd-88742" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.627048 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f6db6dcf4-jklzx"] Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.627389 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f6db6dcf4-jklzx" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api-log" containerID="cri-o://05c6e626fa5935d1b896b63bf5c31df1b74e1ddb71ed30886ac406140acc7ccb" gracePeriod=30 Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.627848 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5f6db6dcf4-jklzx" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api" containerID="cri-o://64a968868aad6889df3e739d2798d9580829607e89b2692c7f05a465591f5d83" gracePeriod=30 Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.778451 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" event={"ID":"335e87e5-67d2-44c6-9f2d-ac0424b243c2","Type":"ContainerStarted","Data":"309eb20150d86724473e65a0064a99e3a991edf6b38d888239c7d5a8476d0657"} 
Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.779732 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.795001 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api-log" containerID="cri-o://562b4e0156c8a3cb6087ec24630c65fbfba793c2a1ebaa6cd5974eb5571b7d9d" gracePeriod=30 Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.795285 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7f6344e-8260-49b1-af60-a37747486529","Type":"ContainerStarted","Data":"0aee747e74c12cdfc35dbc8f4f260250b54689d530cf1dd81bc49f1f55f3f381"} Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.795322 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.795361 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api" containerID="cri-o://0aee747e74c12cdfc35dbc8f4f260250b54689d530cf1dd81bc49f1f55f3f381" gracePeriod=30 Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.817893 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" podStartSLOduration=3.817873267 podStartE2EDuration="3.817873267s" podCreationTimestamp="2025-11-26 13:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:32.804766428 +0000 UTC m=+1316.440591510" watchObservedRunningTime="2025-11-26 13:45:32.817873267 +0000 UTC m=+1316.453698369" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.837751 4695 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.837733012 podStartE2EDuration="3.837733012s" podCreationTimestamp="2025-11-26 13:45:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:32.824210719 +0000 UTC m=+1316.460035801" watchObservedRunningTime="2025-11-26 13:45:32.837733012 +0000 UTC m=+1316.473558094" Nov 26 13:45:32 crc kubenswrapper[4695]: I1126 13:45:32.904127 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.186771 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9c800b1-62f2-42d6-a64c-95a673861ebb" path="/var/lib/kubelet/pods/a9c800b1-62f2-42d6-a64c-95a673861ebb/volumes" Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.807007 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerStarted","Data":"58acf2c9bf59cfcc675fe457ecbcad2bb64bbf575a59d0f4c43a2c908f05a188"} Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.809628 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"466cbcd6-2cb2-4451-b921-c20d81843e39","Type":"ContainerStarted","Data":"5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0"} Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.809675 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"466cbcd6-2cb2-4451-b921-c20d81843e39","Type":"ContainerStarted","Data":"e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1"} Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.815853 4695 generic.go:334] "Generic (PLEG): container finished" podID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" 
containerID="05c6e626fa5935d1b896b63bf5c31df1b74e1ddb71ed30886ac406140acc7ccb" exitCode=143 Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.815916 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6db6dcf4-jklzx" event={"ID":"0ec068e9-365b-4a45-a4c3-44c29ad4f494","Type":"ContainerDied","Data":"05c6e626fa5935d1b896b63bf5c31df1b74e1ddb71ed30886ac406140acc7ccb"} Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.823484 4695 generic.go:334] "Generic (PLEG): container finished" podID="d7f6344e-8260-49b1-af60-a37747486529" containerID="562b4e0156c8a3cb6087ec24630c65fbfba793c2a1ebaa6cd5974eb5571b7d9d" exitCode=143 Nov 26 13:45:33 crc kubenswrapper[4695]: I1126 13:45:33.823586 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7f6344e-8260-49b1-af60-a37747486529","Type":"ContainerDied","Data":"562b4e0156c8a3cb6087ec24630c65fbfba793c2a1ebaa6cd5974eb5571b7d9d"} Nov 26 13:45:34 crc kubenswrapper[4695]: I1126 13:45:34.377635 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:45:34 crc kubenswrapper[4695]: I1126 13:45:34.404215 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.589468875 podStartE2EDuration="5.404193995s" podCreationTimestamp="2025-11-26 13:45:29 +0000 UTC" firstStartedPulling="2025-11-26 13:45:30.28858267 +0000 UTC m=+1313.924407752" lastFinishedPulling="2025-11-26 13:45:32.10330779 +0000 UTC m=+1315.739132872" observedRunningTime="2025-11-26 13:45:33.865729609 +0000 UTC m=+1317.501554701" watchObservedRunningTime="2025-11-26 13:45:34.404193995 +0000 UTC m=+1318.040019087" Nov 26 13:45:34 crc kubenswrapper[4695]: I1126 13:45:34.504622 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 26 13:45:34 crc kubenswrapper[4695]: I1126 13:45:34.853252 4695 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerStarted","Data":"80244729973ae578a564421094b3dd96aba96fba15fc8986160e2d25489be772"} Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.507298 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-bcbb85b97-9vnqg" Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.569471 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-95668c77b-9hn77"] Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.569709 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-95668c77b-9hn77" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-api" containerID="cri-o://41778b070eb0d881071a8787448e9bbbda5f5c824a871136eaa5b506fe14849d" gracePeriod=30 Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.570169 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-95668c77b-9hn77" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-httpd" containerID="cri-o://497203c429007bc3bf7d4001e7c388aca6a02f69c926cc99ce75c8552ba164f2" gracePeriod=30 Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.796946 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f6db6dcf4-jklzx" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read tcp 10.217.0.2:51952->10.217.0.158:9311: read: connection reset by peer" Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.796969 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5f6db6dcf4-jklzx" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.158:9311/healthcheck\": read 
tcp 10.217.0.2:51958->10.217.0.158:9311: read: connection reset by peer" Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.866872 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerStarted","Data":"52d1dde3112b85dbc7cb6d291777c4fea589593a5e1794b5971d0dcc2c126161"} Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.866941 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerStarted","Data":"7c91559373ee4fcaeebba3ccde2ee95aadcbe054c4fab0517508475ab20d1f17"} Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.869573 4695 generic.go:334] "Generic (PLEG): container finished" podID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerID="64a968868aad6889df3e739d2798d9580829607e89b2692c7f05a465591f5d83" exitCode=0 Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.869634 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6db6dcf4-jklzx" event={"ID":"0ec068e9-365b-4a45-a4c3-44c29ad4f494","Type":"ContainerDied","Data":"64a968868aad6889df3e739d2798d9580829607e89b2692c7f05a465591f5d83"} Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.873558 4695 generic.go:334] "Generic (PLEG): container finished" podID="90feb883-0869-4d2d-bce9-1678291bf72f" containerID="497203c429007bc3bf7d4001e7c388aca6a02f69c926cc99ce75c8552ba164f2" exitCode=0 Nov 26 13:45:35 crc kubenswrapper[4695]: I1126 13:45:35.873630 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95668c77b-9hn77" event={"ID":"90feb883-0869-4d2d-bce9-1678291bf72f","Type":"ContainerDied","Data":"497203c429007bc3bf7d4001e7c388aca6a02f69c926cc99ce75c8552ba164f2"} Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.194904 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.310698 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec068e9-365b-4a45-a4c3-44c29ad4f494-logs\") pod \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.310788 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data\") pod \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.310885 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data-custom\") pod \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.310942 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-combined-ca-bundle\") pod \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.311015 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpl8r\" (UniqueName: \"kubernetes.io/projected/0ec068e9-365b-4a45-a4c3-44c29ad4f494-kube-api-access-tpl8r\") pod \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\" (UID: \"0ec068e9-365b-4a45-a4c3-44c29ad4f494\") " Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.311367 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0ec068e9-365b-4a45-a4c3-44c29ad4f494-logs" (OuterVolumeSpecName: "logs") pod "0ec068e9-365b-4a45-a4c3-44c29ad4f494" (UID: "0ec068e9-365b-4a45-a4c3-44c29ad4f494"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.311760 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ec068e9-365b-4a45-a4c3-44c29ad4f494-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.320333 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0ec068e9-365b-4a45-a4c3-44c29ad4f494" (UID: "0ec068e9-365b-4a45-a4c3-44c29ad4f494"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.334506 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ec068e9-365b-4a45-a4c3-44c29ad4f494-kube-api-access-tpl8r" (OuterVolumeSpecName: "kube-api-access-tpl8r") pod "0ec068e9-365b-4a45-a4c3-44c29ad4f494" (UID: "0ec068e9-365b-4a45-a4c3-44c29ad4f494"). InnerVolumeSpecName "kube-api-access-tpl8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.340550 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ec068e9-365b-4a45-a4c3-44c29ad4f494" (UID: "0ec068e9-365b-4a45-a4c3-44c29ad4f494"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.367435 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data" (OuterVolumeSpecName: "config-data") pod "0ec068e9-365b-4a45-a4c3-44c29ad4f494" (UID: "0ec068e9-365b-4a45-a4c3-44c29ad4f494"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.396731 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.397039 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.414057 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.414104 4695 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.414116 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ec068e9-365b-4a45-a4c3-44c29ad4f494-combined-ca-bundle\") 
on node \"crc\" DevicePath \"\"" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.414128 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpl8r\" (UniqueName: \"kubernetes.io/projected/0ec068e9-365b-4a45-a4c3-44c29ad4f494-kube-api-access-tpl8r\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.441719 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.883861 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5f6db6dcf4-jklzx" event={"ID":"0ec068e9-365b-4a45-a4c3-44c29ad4f494","Type":"ContainerDied","Data":"076ca830c9c2550469a60d3df6de405ae460202d625adbc6121b3ae9c4dca099"} Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.883927 4695 scope.go:117] "RemoveContainer" containerID="64a968868aad6889df3e739d2798d9580829607e89b2692c7f05a465591f5d83" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.883920 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5f6db6dcf4-jklzx" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.932781 4695 scope.go:117] "RemoveContainer" containerID="05c6e626fa5935d1b896b63bf5c31df1b74e1ddb71ed30886ac406140acc7ccb" Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.939434 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5f6db6dcf4-jklzx"] Nov 26 13:45:36 crc kubenswrapper[4695]: I1126 13:45:36.948080 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5f6db6dcf4-jklzx"] Nov 26 13:45:37 crc kubenswrapper[4695]: I1126 13:45:37.191613 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" path="/var/lib/kubelet/pods/0ec068e9-365b-4a45-a4c3-44c29ad4f494/volumes" Nov 26 13:45:37 crc kubenswrapper[4695]: I1126 13:45:37.896943 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerStarted","Data":"1681ef5a19ea7c52e1715cd0777b4e217d5235dc1c33bd438addfbbb3bf35e59"} Nov 26 13:45:37 crc kubenswrapper[4695]: I1126 13:45:37.897288 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 13:45:37 crc kubenswrapper[4695]: I1126 13:45:37.919085 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.537812634 podStartE2EDuration="6.919068502s" podCreationTimestamp="2025-11-26 13:45:31 +0000 UTC" firstStartedPulling="2025-11-26 13:45:32.998807392 +0000 UTC m=+1316.634632464" lastFinishedPulling="2025-11-26 13:45:37.38006325 +0000 UTC m=+1321.015888332" observedRunningTime="2025-11-26 13:45:37.914864418 +0000 UTC m=+1321.550689500" watchObservedRunningTime="2025-11-26 13:45:37.919068502 +0000 UTC m=+1321.554893574" Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.650492 4695 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.755095 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-4xcpb"] Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.755371 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" podUID="529f964c-d6e6-45c0-8024-14586a1984d3" containerName="dnsmasq-dns" containerID="cri-o://a4963866846b59f88109bca1c7068b75c99db2c3f6265ef905e6c3adf5532026" gracePeriod=10 Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.931738 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.942473 4695 generic.go:334] "Generic (PLEG): container finished" podID="90feb883-0869-4d2d-bce9-1678291bf72f" containerID="41778b070eb0d881071a8787448e9bbbda5f5c824a871136eaa5b506fe14849d" exitCode=0 Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.942570 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95668c77b-9hn77" event={"ID":"90feb883-0869-4d2d-bce9-1678291bf72f","Type":"ContainerDied","Data":"41778b070eb0d881071a8787448e9bbbda5f5c824a871136eaa5b506fe14849d"} Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.960207 4695 generic.go:334] "Generic (PLEG): container finished" podID="529f964c-d6e6-45c0-8024-14586a1984d3" containerID="a4963866846b59f88109bca1c7068b75c99db2c3f6265ef905e6c3adf5532026" exitCode=0 Nov 26 13:45:39 crc kubenswrapper[4695]: I1126 13:45:39.960243 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" event={"ID":"529f964c-d6e6-45c0-8024-14586a1984d3","Type":"ContainerDied","Data":"a4963866846b59f88109bca1c7068b75c99db2c3f6265ef905e6c3adf5532026"} Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.000676 4695 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.264002 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.333927 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-combined-ca-bundle\") pod \"90feb883-0869-4d2d-bce9-1678291bf72f\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.334055 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-httpd-config\") pod \"90feb883-0869-4d2d-bce9-1678291bf72f\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.334154 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-ovndb-tls-certs\") pod \"90feb883-0869-4d2d-bce9-1678291bf72f\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.334176 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rl9c\" (UniqueName: \"kubernetes.io/projected/90feb883-0869-4d2d-bce9-1678291bf72f-kube-api-access-9rl9c\") pod \"90feb883-0869-4d2d-bce9-1678291bf72f\" (UID: \"90feb883-0869-4d2d-bce9-1678291bf72f\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.334220 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-config\") pod \"90feb883-0869-4d2d-bce9-1678291bf72f\" (UID: 
\"90feb883-0869-4d2d-bce9-1678291bf72f\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.346530 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "90feb883-0869-4d2d-bce9-1678291bf72f" (UID: "90feb883-0869-4d2d-bce9-1678291bf72f"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.351047 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90feb883-0869-4d2d-bce9-1678291bf72f-kube-api-access-9rl9c" (OuterVolumeSpecName: "kube-api-access-9rl9c") pod "90feb883-0869-4d2d-bce9-1678291bf72f" (UID: "90feb883-0869-4d2d-bce9-1678291bf72f"). InnerVolumeSpecName "kube-api-access-9rl9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.353161 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.421533 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90feb883-0869-4d2d-bce9-1678291bf72f" (UID: "90feb883-0869-4d2d-bce9-1678291bf72f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.436901 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-svc\") pod \"529f964c-d6e6-45c0-8024-14586a1984d3\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437089 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gbfq\" (UniqueName: \"kubernetes.io/projected/529f964c-d6e6-45c0-8024-14586a1984d3-kube-api-access-2gbfq\") pod \"529f964c-d6e6-45c0-8024-14586a1984d3\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437115 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-swift-storage-0\") pod \"529f964c-d6e6-45c0-8024-14586a1984d3\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437139 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-sb\") pod \"529f964c-d6e6-45c0-8024-14586a1984d3\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437201 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-config\") pod \"529f964c-d6e6-45c0-8024-14586a1984d3\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437260 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-nb\") pod \"529f964c-d6e6-45c0-8024-14586a1984d3\" (UID: \"529f964c-d6e6-45c0-8024-14586a1984d3\") " Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437615 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rl9c\" (UniqueName: \"kubernetes.io/projected/90feb883-0869-4d2d-bce9-1678291bf72f-kube-api-access-9rl9c\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437631 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.437642 4695 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.450488 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/529f964c-d6e6-45c0-8024-14586a1984d3-kube-api-access-2gbfq" (OuterVolumeSpecName: "kube-api-access-2gbfq") pod "529f964c-d6e6-45c0-8024-14586a1984d3" (UID: "529f964c-d6e6-45c0-8024-14586a1984d3"). InnerVolumeSpecName "kube-api-access-2gbfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.464439 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-config" (OuterVolumeSpecName: "config") pod "90feb883-0869-4d2d-bce9-1678291bf72f" (UID: "90feb883-0869-4d2d-bce9-1678291bf72f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.515447 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "90feb883-0869-4d2d-bce9-1678291bf72f" (UID: "90feb883-0869-4d2d-bce9-1678291bf72f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.527964 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "529f964c-d6e6-45c0-8024-14586a1984d3" (UID: "529f964c-d6e6-45c0-8024-14586a1984d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.537783 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "529f964c-d6e6-45c0-8024-14586a1984d3" (UID: "529f964c-d6e6-45c0-8024-14586a1984d3"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.538866 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gbfq\" (UniqueName: \"kubernetes.io/projected/529f964c-d6e6-45c0-8024-14586a1984d3-kube-api-access-2gbfq\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.538891 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.538901 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.538909 4695 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.538919 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/90feb883-0869-4d2d-bce9-1678291bf72f-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.539455 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "529f964c-d6e6-45c0-8024-14586a1984d3" (UID: "529f964c-d6e6-45c0-8024-14586a1984d3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.545175 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-config" (OuterVolumeSpecName: "config") pod "529f964c-d6e6-45c0-8024-14586a1984d3" (UID: "529f964c-d6e6-45c0-8024-14586a1984d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.566545 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "529f964c-d6e6-45c0-8024-14586a1984d3" (UID: "529f964c-d6e6-45c0-8024-14586a1984d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.641067 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.641106 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.641121 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/529f964c-d6e6-45c0-8024-14586a1984d3-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.970005 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" event={"ID":"529f964c-d6e6-45c0-8024-14586a1984d3","Type":"ContainerDied","Data":"1f6c9a037e25347fbfc7b562b0edea66b7fd023e33b67dad11c60aef1297b598"} Nov 26 
13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.970024 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-4xcpb" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.970087 4695 scope.go:117] "RemoveContainer" containerID="a4963866846b59f88109bca1c7068b75c99db2c3f6265ef905e6c3adf5532026" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.972948 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="cinder-scheduler" containerID="cri-o://e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1" gracePeriod=30 Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.973405 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-95668c77b-9hn77" Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.973473 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-95668c77b-9hn77" event={"ID":"90feb883-0869-4d2d-bce9-1678291bf72f","Type":"ContainerDied","Data":"f8daf9b3168701814e3418386708739810bdc51b3876636ecf4aecca1d62ca39"} Nov 26 13:45:40 crc kubenswrapper[4695]: I1126 13:45:40.973520 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="probe" containerID="cri-o://5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0" gracePeriod=30 Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.010272 4695 scope.go:117] "RemoveContainer" containerID="19559fcc840d1f923589d27c98cce2c34082bf6cbefc8949390a69bab512bab5" Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.016016 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-4xcpb"] Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.029441 4695 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/dnsmasq-dns-688c87cc99-4xcpb"] Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.038442 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-95668c77b-9hn77"] Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.040665 4695 scope.go:117] "RemoveContainer" containerID="497203c429007bc3bf7d4001e7c388aca6a02f69c926cc99ce75c8552ba164f2" Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.049181 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-95668c77b-9hn77"] Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.090212 4695 scope.go:117] "RemoveContainer" containerID="41778b070eb0d881071a8787448e9bbbda5f5c824a871136eaa5b506fe14849d" Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.175931 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="529f964c-d6e6-45c0-8024-14586a1984d3" path="/var/lib/kubelet/pods/529f964c-d6e6-45c0-8024-14586a1984d3/volumes" Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.176743 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" path="/var/lib/kubelet/pods/90feb883-0869-4d2d-bce9-1678291bf72f/volumes" Nov 26 13:45:41 crc kubenswrapper[4695]: I1126 13:45:41.967512 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 13:45:42 crc kubenswrapper[4695]: I1126 13:45:42.564321 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 26 13:45:43 crc kubenswrapper[4695]: I1126 13:45:43.002210 4695 generic.go:334] "Generic (PLEG): container finished" podID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerID="5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0" exitCode=0 Nov 26 13:45:43 crc kubenswrapper[4695]: I1126 13:45:43.002255 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"466cbcd6-2cb2-4451-b921-c20d81843e39","Type":"ContainerDied","Data":"5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0"} Nov 26 13:45:43 crc kubenswrapper[4695]: I1126 13:45:43.582900 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.738766 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.835122 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-combined-ca-bundle\") pod \"466cbcd6-2cb2-4451-b921-c20d81843e39\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.835193 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data\") pod \"466cbcd6-2cb2-4451-b921-c20d81843e39\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.835269 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data-custom\") pod \"466cbcd6-2cb2-4451-b921-c20d81843e39\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.835298 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466cbcd6-2cb2-4451-b921-c20d81843e39-etc-machine-id\") pod \"466cbcd6-2cb2-4451-b921-c20d81843e39\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.835400 4695 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-scripts\") pod \"466cbcd6-2cb2-4451-b921-c20d81843e39\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.835444 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxxrk\" (UniqueName: \"kubernetes.io/projected/466cbcd6-2cb2-4451-b921-c20d81843e39-kube-api-access-lxxrk\") pod \"466cbcd6-2cb2-4451-b921-c20d81843e39\" (UID: \"466cbcd6-2cb2-4451-b921-c20d81843e39\") " Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.837443 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/466cbcd6-2cb2-4451-b921-c20d81843e39-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "466cbcd6-2cb2-4451-b921-c20d81843e39" (UID: "466cbcd6-2cb2-4451-b921-c20d81843e39"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.855742 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-scripts" (OuterVolumeSpecName: "scripts") pod "466cbcd6-2cb2-4451-b921-c20d81843e39" (UID: "466cbcd6-2cb2-4451-b921-c20d81843e39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.858773 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "466cbcd6-2cb2-4451-b921-c20d81843e39" (UID: "466cbcd6-2cb2-4451-b921-c20d81843e39"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.876611 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466cbcd6-2cb2-4451-b921-c20d81843e39-kube-api-access-lxxrk" (OuterVolumeSpecName: "kube-api-access-lxxrk") pod "466cbcd6-2cb2-4451-b921-c20d81843e39" (UID: "466cbcd6-2cb2-4451-b921-c20d81843e39"). InnerVolumeSpecName "kube-api-access-lxxrk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.893136 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "466cbcd6-2cb2-4451-b921-c20d81843e39" (UID: "466cbcd6-2cb2-4451-b921-c20d81843e39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.937506 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.937541 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxxrk\" (UniqueName: \"kubernetes.io/projected/466cbcd6-2cb2-4451-b921-c20d81843e39-kube-api-access-lxxrk\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.937555 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.937563 4695 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.937573 4695 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/466cbcd6-2cb2-4451-b921-c20d81843e39-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:44 crc kubenswrapper[4695]: I1126 13:45:44.953583 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data" (OuterVolumeSpecName: "config-data") pod "466cbcd6-2cb2-4451-b921-c20d81843e39" (UID: "466cbcd6-2cb2-4451-b921-c20d81843e39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.022110 4695 generic.go:334] "Generic (PLEG): container finished" podID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerID="e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1" exitCode=0 Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.022169 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"466cbcd6-2cb2-4451-b921-c20d81843e39","Type":"ContainerDied","Data":"e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1"} Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.022222 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"466cbcd6-2cb2-4451-b921-c20d81843e39","Type":"ContainerDied","Data":"e77319275efd3fd0f10a3a97c644be64ed84c22a75bdb4171cad4ef85f5dc660"} Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.022241 4695 scope.go:117] "RemoveContainer" containerID="5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.022170 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.038643 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/466cbcd6-2cb2-4451-b921-c20d81843e39-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.056427 4695 scope.go:117] "RemoveContainer" containerID="e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.066433 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.080910 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.081678 4695 scope.go:117] "RemoveContainer" containerID="5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.082247 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0\": container with ID starting with 5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0 not found: ID does not exist" containerID="5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.082285 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0"} err="failed to get container status \"5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0\": rpc error: code = NotFound desc = could not find container \"5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0\": container with ID starting with 
5db6afc96a9e7cb66653d1c68edf1b810d4fbec585b3557f4758da8d3a0f30d0 not found: ID does not exist" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.082342 4695 scope.go:117] "RemoveContainer" containerID="e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.082774 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1\": container with ID starting with e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1 not found: ID does not exist" containerID="e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.082832 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1"} err="failed to get container status \"e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1\": rpc error: code = NotFound desc = could not find container \"e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1\": container with ID starting with e63486d7eda6fd678008a5bb0ef40fa83e2c120d73640469033eda126ea09ff1 not found: ID does not exist" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.093865 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094400 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api-log" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094427 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api-log" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094441 4695 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="529f964c-d6e6-45c0-8024-14586a1984d3" containerName="init" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094451 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="529f964c-d6e6-45c0-8024-14586a1984d3" containerName="init" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094469 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="probe" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094477 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="probe" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094491 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-api" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094500 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-api" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094518 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-httpd" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094529 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-httpd" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094551 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="cinder-scheduler" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094559 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="cinder-scheduler" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094572 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" 
containerName="barbican-api" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094579 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api" Nov 26 13:45:45 crc kubenswrapper[4695]: E1126 13:45:45.094644 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="529f964c-d6e6-45c0-8024-14586a1984d3" containerName="dnsmasq-dns" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094655 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="529f964c-d6e6-45c0-8024-14586a1984d3" containerName="dnsmasq-dns" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094886 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="529f964c-d6e6-45c0-8024-14586a1984d3" containerName="dnsmasq-dns" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094920 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-api" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094940 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" containerName="barbican-api" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094958 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="probe" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094975 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" containerName="cinder-scheduler" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.094995 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="90feb883-0869-4d2d-bce9-1678291bf72f" containerName="neutron-httpd" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.095008 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ec068e9-365b-4a45-a4c3-44c29ad4f494" 
containerName="barbican-api-log" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.096286 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.099100 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.102524 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.140629 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-config-data\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.140719 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65648e25-0d32-4537-9e31-e9ca87f02aea-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.140778 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-scripts\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.140805 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.140894 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.141058 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b7n9\" (UniqueName: \"kubernetes.io/projected/65648e25-0d32-4537-9e31-e9ca87f02aea-kube-api-access-5b7n9\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.173173 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466cbcd6-2cb2-4451-b921-c20d81843e39" path="/var/lib/kubelet/pods/466cbcd6-2cb2-4451-b921-c20d81843e39/volumes" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.242289 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65648e25-0d32-4537-9e31-e9ca87f02aea-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.242454 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/65648e25-0d32-4537-9e31-e9ca87f02aea-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.243143 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-scripts\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.243177 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.243205 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.243309 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5b7n9\" (UniqueName: \"kubernetes.io/projected/65648e25-0d32-4537-9e31-e9ca87f02aea-kube-api-access-5b7n9\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.243461 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-config-data\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.247022 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.247284 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-config-data\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.247400 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.248753 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/65648e25-0d32-4537-9e31-e9ca87f02aea-scripts\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.263049 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b7n9\" (UniqueName: \"kubernetes.io/projected/65648e25-0d32-4537-9e31-e9ca87f02aea-kube-api-access-5b7n9\") pod \"cinder-scheduler-0\" (UID: \"65648e25-0d32-4537-9e31-e9ca87f02aea\") " pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.428214 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.711043 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7d4c9c9dbd-9bbnw" Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.799171 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b4894b95b-8zpbh"] Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.799496 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b4894b95b-8zpbh" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon-log" containerID="cri-o://208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46" gracePeriod=30 Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.799671 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-b4894b95b-8zpbh" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon" containerID="cri-o://7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33" gracePeriod=30 Nov 26 13:45:45 crc kubenswrapper[4695]: I1126 13:45:45.939249 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 26 13:45:46 crc kubenswrapper[4695]: I1126 13:45:46.043055 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65648e25-0d32-4537-9e31-e9ca87f02aea","Type":"ContainerStarted","Data":"f30e5b374121c032109de327c822a3a31caaad8cbbf85e1fd5a41c8ca9d98041"} Nov 26 13:45:47 crc kubenswrapper[4695]: I1126 13:45:47.064508 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65648e25-0d32-4537-9e31-e9ca87f02aea","Type":"ContainerStarted","Data":"4bdeaf00bcb343f520de881868e0f1c55262bafeeb592314564091d3f4b51672"} Nov 26 13:45:47 crc kubenswrapper[4695]: I1126 13:45:47.753664 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:47 crc kubenswrapper[4695]: I1126 13:45:47.869722 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-776844bc66-7hpvs" Nov 26 13:45:48 crc kubenswrapper[4695]: I1126 13:45:48.115595 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"65648e25-0d32-4537-9e31-e9ca87f02aea","Type":"ContainerStarted","Data":"b2360c9f426f68dbb6146a306d93ad00e041cf4415acfb2d01975e7c6a05e102"} Nov 26 13:45:48 crc kubenswrapper[4695]: I1126 13:45:48.325969 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-666cd5b87b-cmnl9" Nov 26 13:45:48 crc kubenswrapper[4695]: I1126 13:45:48.354195 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.354165643 podStartE2EDuration="3.354165643s" podCreationTimestamp="2025-11-26 13:45:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:48.152145474 +0000 UTC m=+1331.787970556" watchObservedRunningTime="2025-11-26 13:45:48.354165643 +0000 UTC m=+1331.989990735" Nov 26 13:45:49 crc kubenswrapper[4695]: I1126 13:45:49.126119 4695 generic.go:334] "Generic (PLEG): container finished" podID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerID="7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33" exitCode=0 Nov 26 13:45:49 crc kubenswrapper[4695]: I1126 13:45:49.126540 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4894b95b-8zpbh" event={"ID":"fda8b0d7-85f5-4274-a12e-a09982b9fe3c","Type":"ContainerDied","Data":"7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33"} Nov 26 13:45:50 crc kubenswrapper[4695]: I1126 13:45:50.429149 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/cinder-scheduler-0" Nov 26 13:45:51 crc kubenswrapper[4695]: I1126 13:45:51.779679 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b4894b95b-8zpbh" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.039973 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.040240 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-central-agent" containerID="cri-o://80244729973ae578a564421094b3dd96aba96fba15fc8986160e2d25489be772" gracePeriod=30 Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.040362 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="proxy-httpd" containerID="cri-o://1681ef5a19ea7c52e1715cd0777b4e217d5235dc1c33bd438addfbbb3bf35e59" gracePeriod=30 Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.040415 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="sg-core" containerID="cri-o://52d1dde3112b85dbc7cb6d291777c4fea589593a5e1794b5971d0dcc2c126161" gracePeriod=30 Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.040456 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-notification-agent" containerID="cri-o://7c91559373ee4fcaeebba3ccde2ee95aadcbe054c4fab0517508475ab20d1f17" gracePeriod=30 Nov 26 13:45:52 crc 
kubenswrapper[4695]: I1126 13:45:52.059948 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.165:3000/\": EOF" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.642550 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7d6d6689f5-n925b"] Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.644521 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.648875 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.649191 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.649298 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.655840 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d6d6689f5-n925b"] Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691258 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-public-tls-certs\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691300 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-config-data\") pod \"swift-proxy-7d6d6689f5-n925b\" 
(UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691343 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0cb53a75-c198-433b-b342-7acf8ed7dc0c-etc-swift\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691412 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-combined-ca-bundle\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691437 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-internal-tls-certs\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691531 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cb53a75-c198-433b-b342-7acf8ed7dc0c-run-httpd\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691590 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8pc\" (UniqueName: 
\"kubernetes.io/projected/0cb53a75-c198-433b-b342-7acf8ed7dc0c-kube-api-access-vd8pc\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.691757 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cb53a75-c198-433b-b342-7acf8ed7dc0c-log-httpd\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.793679 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-public-tls-certs\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.794698 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-config-data\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.794813 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0cb53a75-c198-433b-b342-7acf8ed7dc0c-etc-swift\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.794891 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-combined-ca-bundle\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.794977 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-internal-tls-certs\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.795048 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cb53a75-c198-433b-b342-7acf8ed7dc0c-run-httpd\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.795123 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8pc\" (UniqueName: \"kubernetes.io/projected/0cb53a75-c198-433b-b342-7acf8ed7dc0c-kube-api-access-vd8pc\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.795240 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cb53a75-c198-433b-b342-7acf8ed7dc0c-log-httpd\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.795713 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0cb53a75-c198-433b-b342-7acf8ed7dc0c-log-httpd\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.795995 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cb53a75-c198-433b-b342-7acf8ed7dc0c-run-httpd\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.800589 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-combined-ca-bundle\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.802240 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-internal-tls-certs\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.802441 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-public-tls-certs\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.804232 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0cb53a75-c198-433b-b342-7acf8ed7dc0c-etc-swift\") pod 
\"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.815314 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cb53a75-c198-433b-b342-7acf8ed7dc0c-config-data\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.817164 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8pc\" (UniqueName: \"kubernetes.io/projected/0cb53a75-c198-433b-b342-7acf8ed7dc0c-kube-api-access-vd8pc\") pod \"swift-proxy-7d6d6689f5-n925b\" (UID: \"0cb53a75-c198-433b-b342-7acf8ed7dc0c\") " pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.850121 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.852427 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.855424 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.855483 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.855593 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-trqhl" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.857659 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.898588 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-openstack-config\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.898714 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.898967 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57lt\" (UniqueName: \"kubernetes.io/projected/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-kube-api-access-n57lt\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.899217 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:52 crc kubenswrapper[4695]: I1126 13:45:52.966329 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.000804 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-openstack-config\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.000873 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.000963 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n57lt\" (UniqueName: \"kubernetes.io/projected/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-kube-api-access-n57lt\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.001049 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc 
kubenswrapper[4695]: I1126 13:45:53.002320 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-openstack-config\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.007801 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-openstack-config-secret\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.007842 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-combined-ca-bundle\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.020332 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n57lt\" (UniqueName: \"kubernetes.io/projected/d282a7dc-4e06-4e82-8b99-ce6f8416c5cc-kube-api-access-n57lt\") pod \"openstackclient\" (UID: \"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc\") " pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.186162 4695 generic.go:334] "Generic (PLEG): container finished" podID="795192f7-3108-457f-b912-bd47f356881b" containerID="1681ef5a19ea7c52e1715cd0777b4e217d5235dc1c33bd438addfbbb3bf35e59" exitCode=0 Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.186631 4695 generic.go:334] "Generic (PLEG): container finished" podID="795192f7-3108-457f-b912-bd47f356881b" containerID="52d1dde3112b85dbc7cb6d291777c4fea589593a5e1794b5971d0dcc2c126161" exitCode=2 Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 
13:45:53.186683 4695 generic.go:334] "Generic (PLEG): container finished" podID="795192f7-3108-457f-b912-bd47f356881b" containerID="80244729973ae578a564421094b3dd96aba96fba15fc8986160e2d25489be772" exitCode=0 Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.186437 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerDied","Data":"1681ef5a19ea7c52e1715cd0777b4e217d5235dc1c33bd438addfbbb3bf35e59"} Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.186720 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerDied","Data":"52d1dde3112b85dbc7cb6d291777c4fea589593a5e1794b5971d0dcc2c126161"} Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.186761 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerDied","Data":"80244729973ae578a564421094b3dd96aba96fba15fc8986160e2d25489be772"} Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.204668 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.516715 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7d6d6689f5-n925b"] Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.732616 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 26 13:45:53 crc kubenswrapper[4695]: I1126 13:45:53.744375 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:45:54 crc kubenswrapper[4695]: I1126 13:45:54.198917 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc","Type":"ContainerStarted","Data":"c310d209d410ea2817fae8b37e9e0a63b69fc160fe1e161c0391554ccb9eaf44"} Nov 26 13:45:54 crc kubenswrapper[4695]: I1126 13:45:54.204751 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d6d6689f5-n925b" event={"ID":"0cb53a75-c198-433b-b342-7acf8ed7dc0c","Type":"ContainerStarted","Data":"552eb2eedfb61fc553aa26c14510af3a2b6fb0ae9e59b6237f05b5e766e67512"} Nov 26 13:45:54 crc kubenswrapper[4695]: I1126 13:45:54.204776 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d6d6689f5-n925b" event={"ID":"0cb53a75-c198-433b-b342-7acf8ed7dc0c","Type":"ContainerStarted","Data":"077b964f3f0b95c2f0f91028b6d47744c91c6d6a4e94bf65c9a131bc721d5a46"} Nov 26 13:45:54 crc kubenswrapper[4695]: I1126 13:45:54.204787 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7d6d6689f5-n925b" event={"ID":"0cb53a75-c198-433b-b342-7acf8ed7dc0c","Type":"ContainerStarted","Data":"cc84090c388ccc1400288b7ff1027a7588fceb845645ee09520816e2f804c091"} Nov 26 13:45:54 crc kubenswrapper[4695]: I1126 13:45:54.205872 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:54 crc 
kubenswrapper[4695]: I1126 13:45:54.205900 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:45:54 crc kubenswrapper[4695]: I1126 13:45:54.230305 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7d6d6689f5-n925b" podStartSLOduration=2.230289785 podStartE2EDuration="2.230289785s" podCreationTimestamp="2025-11-26 13:45:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:45:54.225206463 +0000 UTC m=+1337.861031545" watchObservedRunningTime="2025-11-26 13:45:54.230289785 +0000 UTC m=+1337.866114867" Nov 26 13:45:55 crc kubenswrapper[4695]: I1126 13:45:55.721782 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.243792 4695 generic.go:334] "Generic (PLEG): container finished" podID="795192f7-3108-457f-b912-bd47f356881b" containerID="7c91559373ee4fcaeebba3ccde2ee95aadcbe054c4fab0517508475ab20d1f17" exitCode=0 Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.244127 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerDied","Data":"7c91559373ee4fcaeebba3ccde2ee95aadcbe054c4fab0517508475ab20d1f17"} Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.355321 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.396327 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pc4x\" (UniqueName: \"kubernetes.io/projected/795192f7-3108-457f-b912-bd47f356881b-kube-api-access-9pc4x\") pod \"795192f7-3108-457f-b912-bd47f356881b\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.396384 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-scripts\") pod \"795192f7-3108-457f-b912-bd47f356881b\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.396428 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-log-httpd\") pod \"795192f7-3108-457f-b912-bd47f356881b\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.396522 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-config-data\") pod \"795192f7-3108-457f-b912-bd47f356881b\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.396542 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-run-httpd\") pod \"795192f7-3108-457f-b912-bd47f356881b\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.396579 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-combined-ca-bundle\") pod \"795192f7-3108-457f-b912-bd47f356881b\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.396647 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-sg-core-conf-yaml\") pod \"795192f7-3108-457f-b912-bd47f356881b\" (UID: \"795192f7-3108-457f-b912-bd47f356881b\") " Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.401251 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "795192f7-3108-457f-b912-bd47f356881b" (UID: "795192f7-3108-457f-b912-bd47f356881b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.404173 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "795192f7-3108-457f-b912-bd47f356881b" (UID: "795192f7-3108-457f-b912-bd47f356881b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.406365 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795192f7-3108-457f-b912-bd47f356881b-kube-api-access-9pc4x" (OuterVolumeSpecName: "kube-api-access-9pc4x") pod "795192f7-3108-457f-b912-bd47f356881b" (UID: "795192f7-3108-457f-b912-bd47f356881b"). InnerVolumeSpecName "kube-api-access-9pc4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.415494 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-scripts" (OuterVolumeSpecName: "scripts") pod "795192f7-3108-457f-b912-bd47f356881b" (UID: "795192f7-3108-457f-b912-bd47f356881b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.440520 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "795192f7-3108-457f-b912-bd47f356881b" (UID: "795192f7-3108-457f-b912-bd47f356881b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.496989 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "795192f7-3108-457f-b912-bd47f356881b" (UID: "795192f7-3108-457f-b912-bd47f356881b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.498234 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pc4x\" (UniqueName: \"kubernetes.io/projected/795192f7-3108-457f-b912-bd47f356881b-kube-api-access-9pc4x\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.498251 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.498261 4695 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.498270 4695 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/795192f7-3108-457f-b912-bd47f356881b-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.498281 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.498290 4695 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.503214 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-config-data" (OuterVolumeSpecName: "config-data") pod "795192f7-3108-457f-b912-bd47f356881b" (UID: "795192f7-3108-457f-b912-bd47f356881b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:45:57 crc kubenswrapper[4695]: I1126 13:45:57.599525 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/795192f7-3108-457f-b912-bd47f356881b-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.263407 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"795192f7-3108-457f-b912-bd47f356881b","Type":"ContainerDied","Data":"58acf2c9bf59cfcc675fe457ecbcad2bb64bbf575a59d0f4c43a2c908f05a188"} Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.263461 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.263466 4695 scope.go:117] "RemoveContainer" containerID="1681ef5a19ea7c52e1715cd0777b4e217d5235dc1c33bd438addfbbb3bf35e59" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.304647 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.327290 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335255 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:58 crc kubenswrapper[4695]: E1126 13:45:58.335630 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="proxy-httpd" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335645 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="proxy-httpd" Nov 26 13:45:58 crc kubenswrapper[4695]: E1126 13:45:58.335661 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795192f7-3108-457f-b912-bd47f356881b" 
containerName="sg-core" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335669 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="sg-core" Nov 26 13:45:58 crc kubenswrapper[4695]: E1126 13:45:58.335697 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-central-agent" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335703 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-central-agent" Nov 26 13:45:58 crc kubenswrapper[4695]: E1126 13:45:58.335719 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-notification-agent" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335725 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-notification-agent" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335910 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-notification-agent" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335927 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="sg-core" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335942 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="ceilometer-central-agent" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.335950 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="795192f7-3108-457f-b912-bd47f356881b" containerName="proxy-httpd" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.337546 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.341646 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.341835 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.351852 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.413993 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-scripts\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.414037 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-config-data\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.414082 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.414098 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " 
pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.414551 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-run-httpd\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.414627 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8wck\" (UniqueName: \"kubernetes.io/projected/878548eb-7f44-4415-b9c4-f9b0ad477f10-kube-api-access-k8wck\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.414648 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-log-httpd\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.516648 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-scripts\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.516720 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-config-data\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.516782 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.516818 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.516861 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-run-httpd\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.516937 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8wck\" (UniqueName: \"kubernetes.io/projected/878548eb-7f44-4415-b9c4-f9b0ad477f10-kube-api-access-k8wck\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.516964 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-log-httpd\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.517693 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-log-httpd\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 
crc kubenswrapper[4695]: I1126 13:45:58.518457 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-run-httpd\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.523192 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-config-data\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.523204 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.526521 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-scripts\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.532021 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.540239 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8wck\" (UniqueName: \"kubernetes.io/projected/878548eb-7f44-4415-b9c4-f9b0ad477f10-kube-api-access-k8wck\") pod \"ceilometer-0\" (UID: 
\"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " pod="openstack/ceilometer-0" Nov 26 13:45:58 crc kubenswrapper[4695]: I1126 13:45:58.690932 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:45:59 crc kubenswrapper[4695]: I1126 13:45:59.179773 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="795192f7-3108-457f-b912-bd47f356881b" path="/var/lib/kubelet/pods/795192f7-3108-457f-b912-bd47f356881b/volumes" Nov 26 13:46:01 crc kubenswrapper[4695]: I1126 13:46:01.779754 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b4894b95b-8zpbh" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 26 13:46:02 crc kubenswrapper[4695]: E1126 13:46:02.936258 4695 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: , extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_795192f7-3108-457f-b912-bd47f356881b/proxy-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_795192f7-3108-457f-b912-bd47f356881b/proxy-httpd/0.log: no such file or directory Nov 26 13:46:02 crc kubenswrapper[4695]: I1126 13:46:02.979683 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:46:02 crc kubenswrapper[4695]: I1126 13:46:02.981284 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7d6d6689f5-n925b" Nov 26 13:46:03 crc kubenswrapper[4695]: E1126 13:46:03.196281 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-58acf2c9bf59cfcc675fe457ecbcad2bb64bbf575a59d0f4c43a2c908f05a188\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-conmon-80244729973ae578a564421094b3dd96aba96fba15fc8986160e2d25489be772.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda8b0d7_85f5_4274_a12e_a09982b9fe3c.slice/crio-7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-52d1dde3112b85dbc7cb6d291777c4fea589593a5e1794b5971d0dcc2c126161.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-conmon-7c91559373ee4fcaeebba3ccde2ee95aadcbe054c4fab0517508475ab20d1f17.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfda8b0d7_85f5_4274_a12e_a09982b9fe3c.slice/crio-conmon-7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7f6344e_8260_49b1_af60_a37747486529.slice/crio-0aee747e74c12cdfc35dbc8f4f260250b54689d530cf1dd81bc49f1f55f3f381.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-conmon-1681ef5a19ea7c52e1715cd0777b4e217d5235dc1c33bd438addfbbb3bf35e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-1681ef5a19ea7c52e1715cd0777b4e217d5235dc1c33bd438addfbbb3bf35e59.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-conmon-52d1dde3112b85dbc7cb6d291777c4fea589593a5e1794b5971d0dcc2c126161.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod795192f7_3108_457f_b912_bd47f356881b.slice/crio-80244729973ae578a564421094b3dd96aba96fba15fc8986160e2d25489be772.scope\": RecentStats: unable to find data in memory cache]" Nov 26 13:46:03 crc kubenswrapper[4695]: I1126 13:46:03.320570 4695 generic.go:334] "Generic (PLEG): container finished" podID="d7f6344e-8260-49b1-af60-a37747486529" containerID="0aee747e74c12cdfc35dbc8f4f260250b54689d530cf1dd81bc49f1f55f3f381" exitCode=137 Nov 26 13:46:03 crc kubenswrapper[4695]: I1126 13:46:03.320643 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7f6344e-8260-49b1-af60-a37747486529","Type":"ContainerDied","Data":"0aee747e74c12cdfc35dbc8f4f260250b54689d530cf1dd81bc49f1f55f3f381"} Nov 26 13:46:03 crc kubenswrapper[4695]: I1126 13:46:03.989412 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.026742 4695 scope.go:117] "RemoveContainer" containerID="52d1dde3112b85dbc7cb6d291777c4fea589593a5e1794b5971d0dcc2c126161" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.079606 4695 scope.go:117] "RemoveContainer" 
containerID="7c91559373ee4fcaeebba3ccde2ee95aadcbe054c4fab0517508475ab20d1f17" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.138454 4695 scope.go:117] "RemoveContainer" containerID="80244729973ae578a564421094b3dd96aba96fba15fc8986160e2d25489be772" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.390341 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"d7f6344e-8260-49b1-af60-a37747486529","Type":"ContainerDied","Data":"c7efac4415fbe46ea101c66174b2b42ad27a70435e2f0ad319ebc02a43caf652"} Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.390592 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7efac4415fbe46ea101c66174b2b42ad27a70435e2f0ad319ebc02a43caf652" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.413514 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.583894 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f6344e-8260-49b1-af60-a37747486529-logs\") pod \"d7f6344e-8260-49b1-af60-a37747486529\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.587160 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7f6344e-8260-49b1-af60-a37747486529-logs" (OuterVolumeSpecName: "logs") pod "d7f6344e-8260-49b1-af60-a37747486529" (UID: "d7f6344e-8260-49b1-af60-a37747486529"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.587534 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw9t9\" (UniqueName: \"kubernetes.io/projected/d7f6344e-8260-49b1-af60-a37747486529-kube-api-access-bw9t9\") pod \"d7f6344e-8260-49b1-af60-a37747486529\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.587619 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data-custom\") pod \"d7f6344e-8260-49b1-af60-a37747486529\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.587651 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data\") pod \"d7f6344e-8260-49b1-af60-a37747486529\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.587694 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-combined-ca-bundle\") pod \"d7f6344e-8260-49b1-af60-a37747486529\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.587786 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-scripts\") pod \"d7f6344e-8260-49b1-af60-a37747486529\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.587841 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/d7f6344e-8260-49b1-af60-a37747486529-etc-machine-id\") pod \"d7f6344e-8260-49b1-af60-a37747486529\" (UID: \"d7f6344e-8260-49b1-af60-a37747486529\") " Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.588363 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7f6344e-8260-49b1-af60-a37747486529-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d7f6344e-8260-49b1-af60-a37747486529" (UID: "d7f6344e-8260-49b1-af60-a37747486529"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.588440 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7f6344e-8260-49b1-af60-a37747486529-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.600510 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d7f6344e-8260-49b1-af60-a37747486529" (UID: "d7f6344e-8260-49b1-af60-a37747486529"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.600774 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7f6344e-8260-49b1-af60-a37747486529-kube-api-access-bw9t9" (OuterVolumeSpecName: "kube-api-access-bw9t9") pod "d7f6344e-8260-49b1-af60-a37747486529" (UID: "d7f6344e-8260-49b1-af60-a37747486529"). InnerVolumeSpecName "kube-api-access-bw9t9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.608491 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-scripts" (OuterVolumeSpecName: "scripts") pod "d7f6344e-8260-49b1-af60-a37747486529" (UID: "d7f6344e-8260-49b1-af60-a37747486529"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.627432 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.668875 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7f6344e-8260-49b1-af60-a37747486529" (UID: "d7f6344e-8260-49b1-af60-a37747486529"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.669447 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data" (OuterVolumeSpecName: "config-data") pod "d7f6344e-8260-49b1-af60-a37747486529" (UID: "d7f6344e-8260-49b1-af60-a37747486529"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.692137 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw9t9\" (UniqueName: \"kubernetes.io/projected/d7f6344e-8260-49b1-af60-a37747486529-kube-api-access-bw9t9\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.692168 4695 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.692179 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.692189 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.692197 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7f6344e-8260-49b1-af60-a37747486529-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:04 crc kubenswrapper[4695]: I1126 13:46:04.692206 4695 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d7f6344e-8260-49b1-af60-a37747486529-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.403525 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerStarted","Data":"70ef699424d08bc61b72a70196ca7744a925cd60bfb6b522bc4708cd18ee7ae7"} Nov 26 13:46:05 crc kubenswrapper[4695]: 
I1126 13:46:05.403877 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerStarted","Data":"c9a4e9caa4fd481920235a9bfcedf7858d6fee22506dee01db4f9d1b3a96e27f"} Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.406921 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"d282a7dc-4e06-4e82-8b99-ce6f8416c5cc","Type":"ContainerStarted","Data":"b1f4fd21d24d7906397c9a475bae4f1c36d1a16e7665e7910e2a08f251a04e7a"} Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.406950 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.429059 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.065888481 podStartE2EDuration="13.429038752s" podCreationTimestamp="2025-11-26 13:45:52 +0000 UTC" firstStartedPulling="2025-11-26 13:45:53.744119001 +0000 UTC m=+1337.379944093" lastFinishedPulling="2025-11-26 13:46:04.107269282 +0000 UTC m=+1347.743094364" observedRunningTime="2025-11-26 13:46:05.424193457 +0000 UTC m=+1349.060018539" watchObservedRunningTime="2025-11-26 13:46:05.429038752 +0000 UTC m=+1349.064863854" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.451898 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.465246 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.482662 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:46:05 crc kubenswrapper[4695]: E1126 13:46:05.483088 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api-log" Nov 26 
13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.483106 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api-log" Nov 26 13:46:05 crc kubenswrapper[4695]: E1126 13:46:05.483120 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.483126 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.483336 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api-log" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.483379 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7f6344e-8260-49b1-af60-a37747486529" containerName="cinder-api" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.484631 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.490311 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.490709 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.491152 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.497254 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.611428 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.611539 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-config-data-custom\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.611810 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.612000 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98045621-506a-4a2b-a135-ed37abdf8de5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.612192 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-config-data\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.612232 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98045621-506a-4a2b-a135-ed37abdf8de5-logs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.612264 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.612302 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-scripts\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.612515 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sff5g\" (UniqueName: \"kubernetes.io/projected/98045621-506a-4a2b-a135-ed37abdf8de5-kube-api-access-sff5g\") pod \"cinder-api-0\" (UID: 
\"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715180 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715252 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-config-data-custom\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715323 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715383 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98045621-506a-4a2b-a135-ed37abdf8de5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715475 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-config-data\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715505 4695 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98045621-506a-4a2b-a135-ed37abdf8de5-logs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715531 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715562 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-scripts\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.715629 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sff5g\" (UniqueName: \"kubernetes.io/projected/98045621-506a-4a2b-a135-ed37abdf8de5-kube-api-access-sff5g\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.716381 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/98045621-506a-4a2b-a135-ed37abdf8de5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.716745 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98045621-506a-4a2b-a135-ed37abdf8de5-logs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 
13:46:05.720503 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.721282 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-config-data\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.724998 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-config-data-custom\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.725898 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.726492 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.733968 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98045621-506a-4a2b-a135-ed37abdf8de5-scripts\") pod \"cinder-api-0\" (UID: 
\"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.737806 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sff5g\" (UniqueName: \"kubernetes.io/projected/98045621-506a-4a2b-a135-ed37abdf8de5-kube-api-access-sff5g\") pod \"cinder-api-0\" (UID: \"98045621-506a-4a2b-a135-ed37abdf8de5\") " pod="openstack/cinder-api-0" Nov 26 13:46:05 crc kubenswrapper[4695]: I1126 13:46:05.807827 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 26 13:46:06 crc kubenswrapper[4695]: I1126 13:46:06.284209 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 26 13:46:06 crc kubenswrapper[4695]: W1126 13:46:06.289790 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98045621_506a_4a2b_a135_ed37abdf8de5.slice/crio-b88868785af3be2c954efe6b1cbce2529b84edb087c018ca333841044e5e0c66 WatchSource:0}: Error finding container b88868785af3be2c954efe6b1cbce2529b84edb087c018ca333841044e5e0c66: Status 404 returned error can't find the container with id b88868785af3be2c954efe6b1cbce2529b84edb087c018ca333841044e5e0c66 Nov 26 13:46:06 crc kubenswrapper[4695]: I1126 13:46:06.396324 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:46:06 crc kubenswrapper[4695]: I1126 13:46:06.396723 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Nov 26 13:46:06 crc kubenswrapper[4695]: I1126 13:46:06.421482 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerStarted","Data":"e76c9f9b0a53caf5fcfd1eb02b481c537c330e5dd83aa104340e2b50160f9127"} Nov 26 13:46:06 crc kubenswrapper[4695]: I1126 13:46:06.423781 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"98045621-506a-4a2b-a135-ed37abdf8de5","Type":"ContainerStarted","Data":"b88868785af3be2c954efe6b1cbce2529b84edb087c018ca333841044e5e0c66"} Nov 26 13:46:07 crc kubenswrapper[4695]: I1126 13:46:07.173464 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7f6344e-8260-49b1-af60-a37747486529" path="/var/lib/kubelet/pods/d7f6344e-8260-49b1-af60-a37747486529/volumes" Nov 26 13:46:07 crc kubenswrapper[4695]: I1126 13:46:07.439092 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"98045621-506a-4a2b-a135-ed37abdf8de5","Type":"ContainerStarted","Data":"026797fbd7788d901004c3348a9e5c4bdf9291dd5b4207ba30d0df6faae9a76f"} Nov 26 13:46:09 crc kubenswrapper[4695]: I1126 13:46:09.459372 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerStarted","Data":"4bd6407e6df45758497170e6b744e68f6892abfff6efc59d4a16bc5e270f78cd"} Nov 26 13:46:09 crc kubenswrapper[4695]: I1126 13:46:09.461140 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"98045621-506a-4a2b-a135-ed37abdf8de5","Type":"ContainerStarted","Data":"76198fe358e8a68e6445e9b65134142650891eef15617bde51faa0ad45eb545f"} Nov 26 13:46:09 crc kubenswrapper[4695]: I1126 13:46:09.461336 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Nov 26 13:46:09 crc kubenswrapper[4695]: I1126 
13:46:09.491520 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.491504098 podStartE2EDuration="4.491504098s" podCreationTimestamp="2025-11-26 13:46:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:46:09.489166362 +0000 UTC m=+1353.124991444" watchObservedRunningTime="2025-11-26 13:46:09.491504098 +0000 UTC m=+1353.127329180" Nov 26 13:46:10 crc kubenswrapper[4695]: I1126 13:46:10.471768 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerStarted","Data":"c0c6881cb05afeadd7d3ec748c495a6415e8eca056c851a56aac6c1274b47272"} Nov 26 13:46:10 crc kubenswrapper[4695]: I1126 13:46:10.471949 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-central-agent" containerID="cri-o://70ef699424d08bc61b72a70196ca7744a925cd60bfb6b522bc4708cd18ee7ae7" gracePeriod=30 Nov 26 13:46:10 crc kubenswrapper[4695]: I1126 13:46:10.472146 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="proxy-httpd" containerID="cri-o://c0c6881cb05afeadd7d3ec748c495a6415e8eca056c851a56aac6c1274b47272" gracePeriod=30 Nov 26 13:46:10 crc kubenswrapper[4695]: I1126 13:46:10.472213 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-notification-agent" containerID="cri-o://e76c9f9b0a53caf5fcfd1eb02b481c537c330e5dd83aa104340e2b50160f9127" gracePeriod=30 Nov 26 13:46:10 crc kubenswrapper[4695]: I1126 13:46:10.472222 4695 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/ceilometer-0" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="sg-core" containerID="cri-o://4bd6407e6df45758497170e6b744e68f6892abfff6efc59d4a16bc5e270f78cd" gracePeriod=30 Nov 26 13:46:10 crc kubenswrapper[4695]: I1126 13:46:10.498647 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.562374516 podStartE2EDuration="12.498631877s" podCreationTimestamp="2025-11-26 13:45:58 +0000 UTC" firstStartedPulling="2025-11-26 13:46:04.633793117 +0000 UTC m=+1348.269618199" lastFinishedPulling="2025-11-26 13:46:09.570050478 +0000 UTC m=+1353.205875560" observedRunningTime="2025-11-26 13:46:10.494569967 +0000 UTC m=+1354.130395049" watchObservedRunningTime="2025-11-26 13:46:10.498631877 +0000 UTC m=+1354.134456959" Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.512800 4695 generic.go:334] "Generic (PLEG): container finished" podID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerID="c0c6881cb05afeadd7d3ec748c495a6415e8eca056c851a56aac6c1274b47272" exitCode=0 Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.513253 4695 generic.go:334] "Generic (PLEG): container finished" podID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerID="4bd6407e6df45758497170e6b744e68f6892abfff6efc59d4a16bc5e270f78cd" exitCode=2 Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.513263 4695 generic.go:334] "Generic (PLEG): container finished" podID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerID="e76c9f9b0a53caf5fcfd1eb02b481c537c330e5dd83aa104340e2b50160f9127" exitCode=0 Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.512996 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerDied","Data":"c0c6881cb05afeadd7d3ec748c495a6415e8eca056c851a56aac6c1274b47272"} Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.513299 4695 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerDied","Data":"4bd6407e6df45758497170e6b744e68f6892abfff6efc59d4a16bc5e270f78cd"} Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.513314 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerDied","Data":"e76c9f9b0a53caf5fcfd1eb02b481c537c330e5dd83aa104340e2b50160f9127"} Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.780423 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-b4894b95b-8zpbh" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.143:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.143:8443: connect: connection refused" Nov 26 13:46:11 crc kubenswrapper[4695]: I1126 13:46:11.780579 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.701505 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-q4zxj"] Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.702905 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.718621 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q4zxj"] Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.794896 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-bkzwl"] Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.796149 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.803101 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bkzwl"] Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.835396 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-1741-account-create-update-fx2z9"] Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.836487 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.840156 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.841424 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/9b003921-6331-4569-9771-82f7ccc92f84-kube-api-access-97xsn\") pod \"nova-api-db-create-q4zxj\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.841527 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b003921-6331-4569-9771-82f7ccc92f84-operator-scripts\") pod \"nova-api-db-create-q4zxj\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.844419 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1741-account-create-update-fx2z9"] Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.943144 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkzbr\" (UniqueName: 
\"kubernetes.io/projected/cb77d838-bf2d-4351-af21-b039e7fa1089-kube-api-access-mkzbr\") pod \"nova-api-1741-account-create-update-fx2z9\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.943191 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-operator-scripts\") pod \"nova-cell0-db-create-bkzwl\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.943226 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/9b003921-6331-4569-9771-82f7ccc92f84-kube-api-access-97xsn\") pod \"nova-api-db-create-q4zxj\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.943408 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqmmf\" (UniqueName: \"kubernetes.io/projected/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-kube-api-access-kqmmf\") pod \"nova-cell0-db-create-bkzwl\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.943603 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b003921-6331-4569-9771-82f7ccc92f84-operator-scripts\") pod \"nova-api-db-create-q4zxj\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.943688 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb77d838-bf2d-4351-af21-b039e7fa1089-operator-scripts\") pod \"nova-api-1741-account-create-update-fx2z9\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.944366 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b003921-6331-4569-9771-82f7ccc92f84-operator-scripts\") pod \"nova-api-db-create-q4zxj\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.962018 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/9b003921-6331-4569-9771-82f7ccc92f84-kube-api-access-97xsn\") pod \"nova-api-db-create-q4zxj\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:12 crc kubenswrapper[4695]: I1126 13:46:12.999136 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-tjhp9"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.000458 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.009607 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tjhp9"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.019243 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.039202 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-1195-account-create-update-cft27"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.040305 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.046057 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.047408 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqmmf\" (UniqueName: \"kubernetes.io/projected/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-kube-api-access-kqmmf\") pod \"nova-cell0-db-create-bkzwl\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.047497 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb77d838-bf2d-4351-af21-b039e7fa1089-operator-scripts\") pod \"nova-api-1741-account-create-update-fx2z9\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.047609 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkzbr\" (UniqueName: \"kubernetes.io/projected/cb77d838-bf2d-4351-af21-b039e7fa1089-kube-api-access-mkzbr\") pod \"nova-api-1741-account-create-update-fx2z9\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.047635 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-operator-scripts\") pod \"nova-cell0-db-create-bkzwl\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.048408 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-operator-scripts\") pod \"nova-cell0-db-create-bkzwl\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.049056 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb77d838-bf2d-4351-af21-b039e7fa1089-operator-scripts\") pod \"nova-api-1741-account-create-update-fx2z9\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.063960 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1195-account-create-update-cft27"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.073437 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkzbr\" (UniqueName: \"kubernetes.io/projected/cb77d838-bf2d-4351-af21-b039e7fa1089-kube-api-access-mkzbr\") pod \"nova-api-1741-account-create-update-fx2z9\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.081468 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqmmf\" (UniqueName: \"kubernetes.io/projected/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-kube-api-access-kqmmf\") pod \"nova-cell0-db-create-bkzwl\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " 
pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.120762 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.148744 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkmvn\" (UniqueName: \"kubernetes.io/projected/f4fa32a3-4bbb-432d-a986-920762192742-kube-api-access-lkmvn\") pod \"nova-cell1-db-create-tjhp9\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.148805 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-operator-scripts\") pod \"nova-cell0-1195-account-create-update-cft27\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.148854 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8tp\" (UniqueName: \"kubernetes.io/projected/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-kube-api-access-qr8tp\") pod \"nova-cell0-1195-account-create-update-cft27\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.149131 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4fa32a3-4bbb-432d-a986-920762192742-operator-scripts\") pod \"nova-cell1-db-create-tjhp9\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 
13:46:13.155544 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.221158 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-5034-account-create-update-pfg6h"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.223003 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.226820 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.241885 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5034-account-create-update-pfg6h"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.250509 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-operator-scripts\") pod \"nova-cell0-1195-account-create-update-cft27\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.250599 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr8tp\" (UniqueName: \"kubernetes.io/projected/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-kube-api-access-qr8tp\") pod \"nova-cell0-1195-account-create-update-cft27\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.250723 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4fa32a3-4bbb-432d-a986-920762192742-operator-scripts\") pod 
\"nova-cell1-db-create-tjhp9\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.250804 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkmvn\" (UniqueName: \"kubernetes.io/projected/f4fa32a3-4bbb-432d-a986-920762192742-kube-api-access-lkmvn\") pod \"nova-cell1-db-create-tjhp9\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.252162 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-operator-scripts\") pod \"nova-cell0-1195-account-create-update-cft27\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.253164 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4fa32a3-4bbb-432d-a986-920762192742-operator-scripts\") pod \"nova-cell1-db-create-tjhp9\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.270225 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkmvn\" (UniqueName: \"kubernetes.io/projected/f4fa32a3-4bbb-432d-a986-920762192742-kube-api-access-lkmvn\") pod \"nova-cell1-db-create-tjhp9\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.274647 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr8tp\" (UniqueName: \"kubernetes.io/projected/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-kube-api-access-qr8tp\") pod 
\"nova-cell0-1195-account-create-update-cft27\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.331139 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.354742 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db49627-d2bf-49a8-889c-fa076d76db84-operator-scripts\") pod \"nova-cell1-5034-account-create-update-pfg6h\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.355096 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjdbc\" (UniqueName: \"kubernetes.io/projected/9db49627-d2bf-49a8-889c-fa076d76db84-kube-api-access-sjdbc\") pod \"nova-cell1-5034-account-create-update-pfg6h\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.455752 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.458563 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjdbc\" (UniqueName: \"kubernetes.io/projected/9db49627-d2bf-49a8-889c-fa076d76db84-kube-api-access-sjdbc\") pod \"nova-cell1-5034-account-create-update-pfg6h\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.462325 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db49627-d2bf-49a8-889c-fa076d76db84-operator-scripts\") pod \"nova-cell1-5034-account-create-update-pfg6h\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.463041 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db49627-d2bf-49a8-889c-fa076d76db84-operator-scripts\") pod \"nova-cell1-5034-account-create-update-pfg6h\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.513703 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjdbc\" (UniqueName: \"kubernetes.io/projected/9db49627-d2bf-49a8-889c-fa076d76db84-kube-api-access-sjdbc\") pod \"nova-cell1-5034-account-create-update-pfg6h\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.570619 4695 generic.go:334] "Generic (PLEG): container finished" podID="878548eb-7f44-4415-b9c4-f9b0ad477f10" 
containerID="70ef699424d08bc61b72a70196ca7744a925cd60bfb6b522bc4708cd18ee7ae7" exitCode=0 Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.570861 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerDied","Data":"70ef699424d08bc61b72a70196ca7744a925cd60bfb6b522bc4708cd18ee7ae7"} Nov 26 13:46:13 crc kubenswrapper[4695]: E1126 13:46:13.611661 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod878548eb_7f44_4415_b9c4_f9b0ad477f10.slice/crio-70ef699424d08bc61b72a70196ca7744a925cd60bfb6b522bc4708cd18ee7ae7.scope\": RecentStats: unable to find data in memory cache]" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.619837 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-q4zxj"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.650465 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.758927 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-bkzwl"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.824542 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.841891 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-1741-account-create-update-fx2z9"] Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.972701 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-sg-core-conf-yaml\") pod \"878548eb-7f44-4415-b9c4-f9b0ad477f10\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.972764 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-combined-ca-bundle\") pod \"878548eb-7f44-4415-b9c4-f9b0ad477f10\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.972839 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-scripts\") pod \"878548eb-7f44-4415-b9c4-f9b0ad477f10\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.972861 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-run-httpd\") pod \"878548eb-7f44-4415-b9c4-f9b0ad477f10\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.972881 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-log-httpd\") pod \"878548eb-7f44-4415-b9c4-f9b0ad477f10\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " Nov 26 13:46:13 crc 
kubenswrapper[4695]: I1126 13:46:13.972937 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-config-data\") pod \"878548eb-7f44-4415-b9c4-f9b0ad477f10\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.972955 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8wck\" (UniqueName: \"kubernetes.io/projected/878548eb-7f44-4415-b9c4-f9b0ad477f10-kube-api-access-k8wck\") pod \"878548eb-7f44-4415-b9c4-f9b0ad477f10\" (UID: \"878548eb-7f44-4415-b9c4-f9b0ad477f10\") " Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.973694 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "878548eb-7f44-4415-b9c4-f9b0ad477f10" (UID: "878548eb-7f44-4415-b9c4-f9b0ad477f10"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.976063 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "878548eb-7f44-4415-b9c4-f9b0ad477f10" (UID: "878548eb-7f44-4415-b9c4-f9b0ad477f10"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:13 crc kubenswrapper[4695]: I1126 13:46:13.987016 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-scripts" (OuterVolumeSpecName: "scripts") pod "878548eb-7f44-4415-b9c4-f9b0ad477f10" (UID: "878548eb-7f44-4415-b9c4-f9b0ad477f10"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.000322 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878548eb-7f44-4415-b9c4-f9b0ad477f10-kube-api-access-k8wck" (OuterVolumeSpecName: "kube-api-access-k8wck") pod "878548eb-7f44-4415-b9c4-f9b0ad477f10" (UID: "878548eb-7f44-4415-b9c4-f9b0ad477f10"). InnerVolumeSpecName "kube-api-access-k8wck". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.020714 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "878548eb-7f44-4415-b9c4-f9b0ad477f10" (UID: "878548eb-7f44-4415-b9c4-f9b0ad477f10"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.023302 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-tjhp9"] Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.037219 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-1195-account-create-update-cft27"] Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.076673 4695 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.076709 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.076719 4695 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.076727 4695 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/878548eb-7f44-4415-b9c4-f9b0ad477f10-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.076736 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8wck\" (UniqueName: \"kubernetes.io/projected/878548eb-7f44-4415-b9c4-f9b0ad477f10-kube-api-access-k8wck\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.112212 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "878548eb-7f44-4415-b9c4-f9b0ad477f10" (UID: "878548eb-7f44-4415-b9c4-f9b0ad477f10"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.156494 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-config-data" (OuterVolumeSpecName: "config-data") pod "878548eb-7f44-4415-b9c4-f9b0ad477f10" (UID: "878548eb-7f44-4415-b9c4-f9b0ad477f10"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.184007 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-5034-account-create-update-pfg6h"] Nov 26 13:46:14 crc kubenswrapper[4695]: W1126 13:46:14.184271 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9db49627_d2bf_49a8_889c_fa076d76db84.slice/crio-f7893a3a9f3e18a7ef7cdb6eecdbbeb0f42e649ca0aafebedd908f27dedab15e WatchSource:0}: Error finding container f7893a3a9f3e18a7ef7cdb6eecdbbeb0f42e649ca0aafebedd908f27dedab15e: Status 404 returned error can't find the container with id f7893a3a9f3e18a7ef7cdb6eecdbbeb0f42e649ca0aafebedd908f27dedab15e Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.185500 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.185521 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878548eb-7f44-4415-b9c4-f9b0ad477f10-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.594042 4695 generic.go:334] "Generic (PLEG): container finished" podID="9b003921-6331-4569-9771-82f7ccc92f84" containerID="c9926f27a71f1f404324a49a20230d8fa967b4dff86b1773449f12e230a469bc" exitCode=0 Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.594202 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q4zxj" event={"ID":"9b003921-6331-4569-9771-82f7ccc92f84","Type":"ContainerDied","Data":"c9926f27a71f1f404324a49a20230d8fa967b4dff86b1773449f12e230a469bc"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.594404 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-api-db-create-q4zxj" event={"ID":"9b003921-6331-4569-9771-82f7ccc92f84","Type":"ContainerStarted","Data":"9bf135b2ebad4124f663d0e9705466915b719d8c0356f0788df2a781c500574e"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.596799 4695 generic.go:334] "Generic (PLEG): container finished" podID="cb77d838-bf2d-4351-af21-b039e7fa1089" containerID="6b2185c381c33a81e989ace298ff8b1f51f062a5e9e9f0ffee659f9ea071b8eb" exitCode=0 Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.596876 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1741-account-create-update-fx2z9" event={"ID":"cb77d838-bf2d-4351-af21-b039e7fa1089","Type":"ContainerDied","Data":"6b2185c381c33a81e989ace298ff8b1f51f062a5e9e9f0ffee659f9ea071b8eb"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.596904 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1741-account-create-update-fx2z9" event={"ID":"cb77d838-bf2d-4351-af21-b039e7fa1089","Type":"ContainerStarted","Data":"7b5d46e9d3204d2ecaf7956de67740f5f9ba2c9afa2bf0f7f009330f82ecf035"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.598210 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tjhp9" event={"ID":"f4fa32a3-4bbb-432d-a986-920762192742","Type":"ContainerStarted","Data":"784c610fcb5b574b24a04785c725c1e365a1237c04c180f9e3cc4654d5ca5c2c"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.602489 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1195-account-create-update-cft27" event={"ID":"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56","Type":"ContainerStarted","Data":"ac6b7ea9683b3fe84dcc23a585e6fe76e55b4f41b72c66db6c6a8a75c4b0fff0"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.602548 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1195-account-create-update-cft27" 
event={"ID":"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56","Type":"ContainerStarted","Data":"c34885c7dec7a3d7a9835283718096da0747dfcfb476bb689c2db162f4435c2b"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.605988 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5034-account-create-update-pfg6h" event={"ID":"9db49627-d2bf-49a8-889c-fa076d76db84","Type":"ContainerStarted","Data":"f7893a3a9f3e18a7ef7cdb6eecdbbeb0f42e649ca0aafebedd908f27dedab15e"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.613650 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"878548eb-7f44-4415-b9c4-f9b0ad477f10","Type":"ContainerDied","Data":"c9a4e9caa4fd481920235a9bfcedf7858d6fee22506dee01db4f9d1b3a96e27f"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.613704 4695 scope.go:117] "RemoveContainer" containerID="c0c6881cb05afeadd7d3ec748c495a6415e8eca056c851a56aac6c1274b47272" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.613841 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.620302 4695 generic.go:334] "Generic (PLEG): container finished" podID="b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476" containerID="c51cbe4d35500cc0e89eff5a8b423328b4429064abf8b76f1d29b2c54500d29f" exitCode=0 Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.620369 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bkzwl" event={"ID":"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476","Type":"ContainerDied","Data":"c51cbe4d35500cc0e89eff5a8b423328b4429064abf8b76f1d29b2c54500d29f"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.620399 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bkzwl" event={"ID":"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476","Type":"ContainerStarted","Data":"f8e57f05badb6dd1077ce8092832cefdc89aa3b9806da4c02277559df1970abe"} Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.659627 4695 scope.go:117] "RemoveContainer" containerID="4bd6407e6df45758497170e6b744e68f6892abfff6efc59d4a16bc5e270f78cd" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.688505 4695 scope.go:117] "RemoveContainer" containerID="e76c9f9b0a53caf5fcfd1eb02b481c537c330e5dd83aa104340e2b50160f9127" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.719885 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-1195-account-create-update-cft27" podStartSLOduration=1.719864588 podStartE2EDuration="1.719864588s" podCreationTimestamp="2025-11-26 13:46:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:46:14.690027415 +0000 UTC m=+1358.325852507" watchObservedRunningTime="2025-11-26 13:46:14.719864588 +0000 UTC m=+1358.355689690" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.731616 4695 scope.go:117] "RemoveContainer" 
containerID="70ef699424d08bc61b72a70196ca7744a925cd60bfb6b522bc4708cd18ee7ae7" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.757685 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.776759 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792014 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:14 crc kubenswrapper[4695]: E1126 13:46:14.792447 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-central-agent" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792464 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-central-agent" Nov 26 13:46:14 crc kubenswrapper[4695]: E1126 13:46:14.792486 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="sg-core" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792492 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="sg-core" Nov 26 13:46:14 crc kubenswrapper[4695]: E1126 13:46:14.792499 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-notification-agent" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792505 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-notification-agent" Nov 26 13:46:14 crc kubenswrapper[4695]: E1126 13:46:14.792520 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="proxy-httpd" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 
13:46:14.792526 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="proxy-httpd" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792710 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="proxy-httpd" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792730 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-notification-agent" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792744 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="sg-core" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.792759 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" containerName="ceilometer-central-agent" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.794524 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.796835 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.797122 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.804666 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.904709 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-scripts\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.904801 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bm6j\" (UniqueName: \"kubernetes.io/projected/cc17e7a8-250d-4902-9829-59e4e9d9e258-kube-api-access-9bm6j\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.904864 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-config-data\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.904926 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-log-httpd\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " 
pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.905001 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.905063 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:14 crc kubenswrapper[4695]: I1126 13:46:14.905104 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-run-httpd\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.007140 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bm6j\" (UniqueName: \"kubernetes.io/projected/cc17e7a8-250d-4902-9829-59e4e9d9e258-kube-api-access-9bm6j\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.007225 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-config-data\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.007279 4695 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-log-httpd\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.007362 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.007415 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.007448 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-run-httpd\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.007486 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-scripts\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.011025 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-run-httpd\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc 
kubenswrapper[4695]: I1126 13:46:15.011307 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-log-httpd\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.013863 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-scripts\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.018032 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.025875 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.026549 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-config-data\") pod \"ceilometer-0\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.027287 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bm6j\" (UniqueName: \"kubernetes.io/projected/cc17e7a8-250d-4902-9829-59e4e9d9e258-kube-api-access-9bm6j\") pod \"ceilometer-0\" (UID: 
\"cc17e7a8-250d-4902-9829-59e4e9d9e258\") " pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.120664 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.173299 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878548eb-7f44-4415-b9c4-f9b0ad477f10" path="/var/lib/kubelet/pods/878548eb-7f44-4415-b9c4-f9b0ad477f10/volumes" Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.589536 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.640549 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerStarted","Data":"150055a756d34f9e4baea0a688b32dbe99bc0c000662a12669a45c7e1635e9d8"} Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.642932 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4fa32a3-4bbb-432d-a986-920762192742" containerID="0a529c087cc2accf72e8a5578df62155bd33c7d9f5a1ea00adb4d7a480b5904a" exitCode=0 Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.643006 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tjhp9" event={"ID":"f4fa32a3-4bbb-432d-a986-920762192742","Type":"ContainerDied","Data":"0a529c087cc2accf72e8a5578df62155bd33c7d9f5a1ea00adb4d7a480b5904a"} Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.645756 4695 generic.go:334] "Generic (PLEG): container finished" podID="9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56" containerID="ac6b7ea9683b3fe84dcc23a585e6fe76e55b4f41b72c66db6c6a8a75c4b0fff0" exitCode=0 Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.645803 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1195-account-create-update-cft27" 
event={"ID":"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56","Type":"ContainerDied","Data":"ac6b7ea9683b3fe84dcc23a585e6fe76e55b4f41b72c66db6c6a8a75c4b0fff0"} Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.652942 4695 generic.go:334] "Generic (PLEG): container finished" podID="9db49627-d2bf-49a8-889c-fa076d76db84" containerID="0e97052315eb2017ae3bff9436b94da22b00134eab97e4b2d88dca6c12e4adde" exitCode=0 Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.653012 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5034-account-create-update-pfg6h" event={"ID":"9db49627-d2bf-49a8-889c-fa076d76db84","Type":"ContainerDied","Data":"0e97052315eb2017ae3bff9436b94da22b00134eab97e4b2d88dca6c12e4adde"} Nov 26 13:46:15 crc kubenswrapper[4695]: I1126 13:46:15.993506 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.130632 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/9b003921-6331-4569-9771-82f7ccc92f84-kube-api-access-97xsn\") pod \"9b003921-6331-4569-9771-82f7ccc92f84\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.130693 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b003921-6331-4569-9771-82f7ccc92f84-operator-scripts\") pod \"9b003921-6331-4569-9771-82f7ccc92f84\" (UID: \"9b003921-6331-4569-9771-82f7ccc92f84\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.131524 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b003921-6331-4569-9771-82f7ccc92f84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9b003921-6331-4569-9771-82f7ccc92f84" (UID: "9b003921-6331-4569-9771-82f7ccc92f84"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.136436 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b003921-6331-4569-9771-82f7ccc92f84-kube-api-access-97xsn" (OuterVolumeSpecName: "kube-api-access-97xsn") pod "9b003921-6331-4569-9771-82f7ccc92f84" (UID: "9b003921-6331-4569-9771-82f7ccc92f84"). InnerVolumeSpecName "kube-api-access-97xsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.198090 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.201827 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.238671 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97xsn\" (UniqueName: \"kubernetes.io/projected/9b003921-6331-4569-9771-82f7ccc92f84-kube-api-access-97xsn\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.238714 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9b003921-6331-4569-9771-82f7ccc92f84-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.337727 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.340376 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqmmf\" (UniqueName: \"kubernetes.io/projected/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-kube-api-access-kqmmf\") pod \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.340481 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-operator-scripts\") pod \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\" (UID: \"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.340626 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkzbr\" (UniqueName: \"kubernetes.io/projected/cb77d838-bf2d-4351-af21-b039e7fa1089-kube-api-access-mkzbr\") pod \"cb77d838-bf2d-4351-af21-b039e7fa1089\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.340704 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb77d838-bf2d-4351-af21-b039e7fa1089-operator-scripts\") pod \"cb77d838-bf2d-4351-af21-b039e7fa1089\" (UID: \"cb77d838-bf2d-4351-af21-b039e7fa1089\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.341484 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb77d838-bf2d-4351-af21-b039e7fa1089-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cb77d838-bf2d-4351-af21-b039e7fa1089" (UID: "cb77d838-bf2d-4351-af21-b039e7fa1089"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.342173 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476" (UID: "b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.347206 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb77d838-bf2d-4351-af21-b039e7fa1089-kube-api-access-mkzbr" (OuterVolumeSpecName: "kube-api-access-mkzbr") pod "cb77d838-bf2d-4351-af21-b039e7fa1089" (UID: "cb77d838-bf2d-4351-af21-b039e7fa1089"). InnerVolumeSpecName "kube-api-access-mkzbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.347483 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-kube-api-access-kqmmf" (OuterVolumeSpecName: "kube-api-access-kqmmf") pod "b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476" (UID: "b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476"). InnerVolumeSpecName "kube-api-access-kqmmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.442307 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-logs\") pod \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.442382 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-tls-certs\") pod \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.442401 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-secret-key\") pod \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.442425 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-config-data\") pod \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.442581 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85mhv\" (UniqueName: \"kubernetes.io/projected/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-kube-api-access-85mhv\") pod \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.442609 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-scripts\") pod \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.442636 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-combined-ca-bundle\") pod \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\" (UID: \"fda8b0d7-85f5-4274-a12e-a09982b9fe3c\") " Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.443025 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkzbr\" (UniqueName: \"kubernetes.io/projected/cb77d838-bf2d-4351-af21-b039e7fa1089-kube-api-access-mkzbr\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.443038 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cb77d838-bf2d-4351-af21-b039e7fa1089-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.443048 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqmmf\" (UniqueName: \"kubernetes.io/projected/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-kube-api-access-kqmmf\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.443057 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.446123 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-logs" (OuterVolumeSpecName: "logs") pod "fda8b0d7-85f5-4274-a12e-a09982b9fe3c" (UID: "fda8b0d7-85f5-4274-a12e-a09982b9fe3c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.446642 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "fda8b0d7-85f5-4274-a12e-a09982b9fe3c" (UID: "fda8b0d7-85f5-4274-a12e-a09982b9fe3c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.449210 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-kube-api-access-85mhv" (OuterVolumeSpecName: "kube-api-access-85mhv") pod "fda8b0d7-85f5-4274-a12e-a09982b9fe3c" (UID: "fda8b0d7-85f5-4274-a12e-a09982b9fe3c"). InnerVolumeSpecName "kube-api-access-85mhv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.481718 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-scripts" (OuterVolumeSpecName: "scripts") pod "fda8b0d7-85f5-4274-a12e-a09982b9fe3c" (UID: "fda8b0d7-85f5-4274-a12e-a09982b9fe3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.486224 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-config-data" (OuterVolumeSpecName: "config-data") pod "fda8b0d7-85f5-4274-a12e-a09982b9fe3c" (UID: "fda8b0d7-85f5-4274-a12e-a09982b9fe3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.511835 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fda8b0d7-85f5-4274-a12e-a09982b9fe3c" (UID: "fda8b0d7-85f5-4274-a12e-a09982b9fe3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.532871 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "fda8b0d7-85f5-4274-a12e-a09982b9fe3c" (UID: "fda8b0d7-85f5-4274-a12e-a09982b9fe3c"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.544161 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.544323 4695 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.544408 4695 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.544465 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-config-data\") on node \"crc\" DevicePath 
\"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.544544 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85mhv\" (UniqueName: \"kubernetes.io/projected/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-kube-api-access-85mhv\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.544601 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.544653 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fda8b0d7-85f5-4274-a12e-a09982b9fe3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.664902 4695 generic.go:334] "Generic (PLEG): container finished" podID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerID="208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46" exitCode=137 Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.664989 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4894b95b-8zpbh" event={"ID":"fda8b0d7-85f5-4274-a12e-a09982b9fe3c","Type":"ContainerDied","Data":"208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46"} Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.665017 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b4894b95b-8zpbh" event={"ID":"fda8b0d7-85f5-4274-a12e-a09982b9fe3c","Type":"ContainerDied","Data":"100c628a2463b22ba34178fe8be9552e81b2024ed522772158f73e210c39274f"} Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.665033 4695 scope.go:117] "RemoveContainer" containerID="7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.665166 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b4894b95b-8zpbh" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.673120 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-bkzwl" event={"ID":"b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476","Type":"ContainerDied","Data":"f8e57f05badb6dd1077ce8092832cefdc89aa3b9806da4c02277559df1970abe"} Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.673162 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e57f05badb6dd1077ce8092832cefdc89aa3b9806da4c02277559df1970abe" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.673226 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-bkzwl" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.687932 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-q4zxj" event={"ID":"9b003921-6331-4569-9771-82f7ccc92f84","Type":"ContainerDied","Data":"9bf135b2ebad4124f663d0e9705466915b719d8c0356f0788df2a781c500574e"} Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.687973 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9bf135b2ebad4124f663d0e9705466915b719d8c0356f0788df2a781c500574e" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.688027 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-q4zxj" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.695137 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-1741-account-create-update-fx2z9" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.704177 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-1741-account-create-update-fx2z9" event={"ID":"cb77d838-bf2d-4351-af21-b039e7fa1089","Type":"ContainerDied","Data":"7b5d46e9d3204d2ecaf7956de67740f5f9ba2c9afa2bf0f7f009330f82ecf035"} Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.704216 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b5d46e9d3204d2ecaf7956de67740f5f9ba2c9afa2bf0f7f009330f82ecf035" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.718023 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-b4894b95b-8zpbh"] Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.728515 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-b4894b95b-8zpbh"] Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.884069 4695 scope.go:117] "RemoveContainer" containerID="208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.907603 4695 scope.go:117] "RemoveContainer" containerID="7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33" Nov 26 13:46:16 crc kubenswrapper[4695]: E1126 13:46:16.908551 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33\": container with ID starting with 7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33 not found: ID does not exist" containerID="7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.908581 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33"} 
err="failed to get container status \"7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33\": rpc error: code = NotFound desc = could not find container \"7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33\": container with ID starting with 7420f8840c99c0791725263c5be544d1543258d7a2c25d095df507b61b5f7e33 not found: ID does not exist" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.908756 4695 scope.go:117] "RemoveContainer" containerID="208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46" Nov 26 13:46:16 crc kubenswrapper[4695]: E1126 13:46:16.908979 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46\": container with ID starting with 208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46 not found: ID does not exist" containerID="208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46" Nov 26 13:46:16 crc kubenswrapper[4695]: I1126 13:46:16.908999 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46"} err="failed to get container status \"208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46\": rpc error: code = NotFound desc = could not find container \"208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46\": container with ID starting with 208401176bd6e2cc6776763ea7322c50191c7a7b1e30cfec7a872ae49a291b46 not found: ID does not exist" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.151375 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-1195-account-create-update-cft27" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.200042 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" path="/var/lib/kubelet/pods/fda8b0d7-85f5-4274-a12e-a09982b9fe3c/volumes" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.263602 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qr8tp\" (UniqueName: \"kubernetes.io/projected/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-kube-api-access-qr8tp\") pod \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.263869 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-operator-scripts\") pod \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\" (UID: \"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56\") " Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.265335 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56" (UID: "9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.268690 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-kube-api-access-qr8tp" (OuterVolumeSpecName: "kube-api-access-qr8tp") pod "9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56" (UID: "9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56"). InnerVolumeSpecName "kube-api-access-qr8tp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.285203 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5034-account-create-update-pfg6h" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.295768 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tjhp9" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.365268 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkmvn\" (UniqueName: \"kubernetes.io/projected/f4fa32a3-4bbb-432d-a986-920762192742-kube-api-access-lkmvn\") pod \"f4fa32a3-4bbb-432d-a986-920762192742\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.365497 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjdbc\" (UniqueName: \"kubernetes.io/projected/9db49627-d2bf-49a8-889c-fa076d76db84-kube-api-access-sjdbc\") pod \"9db49627-d2bf-49a8-889c-fa076d76db84\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.365574 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4fa32a3-4bbb-432d-a986-920762192742-operator-scripts\") pod \"f4fa32a3-4bbb-432d-a986-920762192742\" (UID: \"f4fa32a3-4bbb-432d-a986-920762192742\") " Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.365600 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db49627-d2bf-49a8-889c-fa076d76db84-operator-scripts\") pod \"9db49627-d2bf-49a8-889c-fa076d76db84\" (UID: \"9db49627-d2bf-49a8-889c-fa076d76db84\") " Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.365976 4695 reconciler_common.go:293] 
"Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.365995 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qr8tp\" (UniqueName: \"kubernetes.io/projected/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56-kube-api-access-qr8tp\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.366161 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fa32a3-4bbb-432d-a986-920762192742-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4fa32a3-4bbb-432d-a986-920762192742" (UID: "f4fa32a3-4bbb-432d-a986-920762192742"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.366199 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9db49627-d2bf-49a8-889c-fa076d76db84-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9db49627-d2bf-49a8-889c-fa076d76db84" (UID: "9db49627-d2bf-49a8-889c-fa076d76db84"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.369684 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fa32a3-4bbb-432d-a986-920762192742-kube-api-access-lkmvn" (OuterVolumeSpecName: "kube-api-access-lkmvn") pod "f4fa32a3-4bbb-432d-a986-920762192742" (UID: "f4fa32a3-4bbb-432d-a986-920762192742"). InnerVolumeSpecName "kube-api-access-lkmvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.371835 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9db49627-d2bf-49a8-889c-fa076d76db84-kube-api-access-sjdbc" (OuterVolumeSpecName: "kube-api-access-sjdbc") pod "9db49627-d2bf-49a8-889c-fa076d76db84" (UID: "9db49627-d2bf-49a8-889c-fa076d76db84"). InnerVolumeSpecName "kube-api-access-sjdbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.467926 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4fa32a3-4bbb-432d-a986-920762192742-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.467973 4695 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9db49627-d2bf-49a8-889c-fa076d76db84-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.467992 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkmvn\" (UniqueName: \"kubernetes.io/projected/f4fa32a3-4bbb-432d-a986-920762192742-kube-api-access-lkmvn\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.468009 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjdbc\" (UniqueName: \"kubernetes.io/projected/9db49627-d2bf-49a8-889c-fa076d76db84-kube-api-access-sjdbc\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.705552 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-1195-account-create-update-cft27" event={"ID":"9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56","Type":"ContainerDied","Data":"c34885c7dec7a3d7a9835283718096da0747dfcfb476bb689c2db162f4435c2b"} Nov 26 13:46:17 crc kubenswrapper[4695]: 
I1126 13:46:17.705879 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c34885c7dec7a3d7a9835283718096da0747dfcfb476bb689c2db162f4435c2b"
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.705603 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-1195-account-create-update-cft27"
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.707028 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-5034-account-create-update-pfg6h"
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.707028 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-5034-account-create-update-pfg6h" event={"ID":"9db49627-d2bf-49a8-889c-fa076d76db84","Type":"ContainerDied","Data":"f7893a3a9f3e18a7ef7cdb6eecdbbeb0f42e649ca0aafebedd908f27dedab15e"}
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.707235 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7893a3a9f3e18a7ef7cdb6eecdbbeb0f42e649ca0aafebedd908f27dedab15e"
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.709850 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerStarted","Data":"24d6ad1293d13b9f779604f8373a681222e97fc5a9ea7b8342d82affd941f075"}
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.711013 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-tjhp9" event={"ID":"f4fa32a3-4bbb-432d-a986-920762192742","Type":"ContainerDied","Data":"784c610fcb5b574b24a04785c725c1e365a1237c04c180f9e3cc4654d5ca5c2c"}
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.711035 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="784c610fcb5b574b24a04785c725c1e365a1237c04c180f9e3cc4654d5ca5c2c"
Nov 26 13:46:17 crc kubenswrapper[4695]: I1126 13:46:17.711083 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-tjhp9"
Nov 26 13:46:18 crc kubenswrapper[4695]: I1126 13:46:18.004568 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Nov 26 13:46:18 crc kubenswrapper[4695]: I1126 13:46:18.720927 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerStarted","Data":"123562bd6aedd7f2b7b53a06afe9373129208af99f5a73ea4379b45efdb1a8f0"}
Nov 26 13:46:19 crc kubenswrapper[4695]: I1126 13:46:19.323273 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 13:46:19 crc kubenswrapper[4695]: I1126 13:46:19.732627 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerStarted","Data":"8828bb7f6835b9462a70898272fa408117c07aa7680fc303dbe5e7dab045fb07"}
Nov 26 13:46:20 crc kubenswrapper[4695]: I1126 13:46:20.746335 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerStarted","Data":"733f930a67330efc3b7e971a6c4d06aed8428ef7f2152214c2c2140e214a33db"}
Nov 26 13:46:20 crc kubenswrapper[4695]: I1126 13:46:20.746582 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-central-agent" containerID="cri-o://24d6ad1293d13b9f779604f8373a681222e97fc5a9ea7b8342d82affd941f075" gracePeriod=30
Nov 26 13:46:20 crc kubenswrapper[4695]: I1126 13:46:20.746674 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 26 13:46:20 crc kubenswrapper[4695]: I1126 13:46:20.747039 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="proxy-httpd" containerID="cri-o://733f930a67330efc3b7e971a6c4d06aed8428ef7f2152214c2c2140e214a33db" gracePeriod=30
Nov 26 13:46:20 crc kubenswrapper[4695]: I1126 13:46:20.747100 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="sg-core" containerID="cri-o://8828bb7f6835b9462a70898272fa408117c07aa7680fc303dbe5e7dab045fb07" gracePeriod=30
Nov 26 13:46:20 crc kubenswrapper[4695]: I1126 13:46:20.747147 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-notification-agent" containerID="cri-o://123562bd6aedd7f2b7b53a06afe9373129208af99f5a73ea4379b45efdb1a8f0" gracePeriod=30
Nov 26 13:46:20 crc kubenswrapper[4695]: I1126 13:46:20.781807 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.166452742 podStartE2EDuration="6.781785372s" podCreationTimestamp="2025-11-26 13:46:14 +0000 UTC" firstStartedPulling="2025-11-26 13:46:15.577911941 +0000 UTC m=+1359.213737023" lastFinishedPulling="2025-11-26 13:46:20.193244571 +0000 UTC m=+1363.829069653" observedRunningTime="2025-11-26 13:46:20.778907171 +0000 UTC m=+1364.414732273" watchObservedRunningTime="2025-11-26 13:46:20.781785372 +0000 UTC m=+1364.417610474"
Nov 26 13:46:21 crc kubenswrapper[4695]: I1126 13:46:21.757984 4695 generic.go:334] "Generic (PLEG): container finished" podID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerID="733f930a67330efc3b7e971a6c4d06aed8428ef7f2152214c2c2140e214a33db" exitCode=0
Nov 26 13:46:21 crc kubenswrapper[4695]: I1126 13:46:21.758282 4695 generic.go:334] "Generic (PLEG): container finished" podID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerID="8828bb7f6835b9462a70898272fa408117c07aa7680fc303dbe5e7dab045fb07" exitCode=2
Nov 26 13:46:21 crc kubenswrapper[4695]: I1126 13:46:21.758294 4695 generic.go:334] "Generic (PLEG): container finished" podID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerID="123562bd6aedd7f2b7b53a06afe9373129208af99f5a73ea4379b45efdb1a8f0" exitCode=0
Nov 26 13:46:21 crc kubenswrapper[4695]: I1126 13:46:21.758053 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerDied","Data":"733f930a67330efc3b7e971a6c4d06aed8428ef7f2152214c2c2140e214a33db"}
Nov 26 13:46:21 crc kubenswrapper[4695]: I1126 13:46:21.758338 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerDied","Data":"8828bb7f6835b9462a70898272fa408117c07aa7680fc303dbe5e7dab045fb07"}
Nov 26 13:46:21 crc kubenswrapper[4695]: I1126 13:46:21.758373 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerDied","Data":"123562bd6aedd7f2b7b53a06afe9373129208af99f5a73ea4379b45efdb1a8f0"}
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.301785 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7vjhl"]
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302617 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9db49627-d2bf-49a8-889c-fa076d76db84" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302636 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9db49627-d2bf-49a8-889c-fa076d76db84" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302667 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb77d838-bf2d-4351-af21-b039e7fa1089" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302677 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb77d838-bf2d-4351-af21-b039e7fa1089" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302695 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon-log"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302703 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon-log"
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302723 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302731 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302743 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b003921-6331-4569-9771-82f7ccc92f84" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302751 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b003921-6331-4569-9771-82f7ccc92f84" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302769 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fa32a3-4bbb-432d-a986-920762192742" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302777 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fa32a3-4bbb-432d-a986-920762192742" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302802 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302810 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: E1126 13:46:23.302821 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.302829 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303046 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fa32a3-4bbb-432d-a986-920762192742" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303061 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="9db49627-d2bf-49a8-889c-fa076d76db84" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303080 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon-log"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303097 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303111 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303140 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b003921-6331-4569-9771-82f7ccc92f84" containerName="mariadb-database-create"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303152 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda8b0d7-85f5-4274-a12e-a09982b9fe3c" containerName="horizon"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.303163 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb77d838-bf2d-4351-af21-b039e7fa1089" containerName="mariadb-account-create-update"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.304037 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.306509 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.306770 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.306959 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jl79p"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.315709 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7vjhl"]
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.383395 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9fxm\" (UniqueName: \"kubernetes.io/projected/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-kube-api-access-f9fxm\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.383499 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.383534 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-scripts\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.383564 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-config-data\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.485043 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9fxm\" (UniqueName: \"kubernetes.io/projected/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-kube-api-access-f9fxm\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.485103 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.485127 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-scripts\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.485149 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-config-data\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.491531 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.500774 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-config-data\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.504169 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-scripts\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.504617 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9fxm\" (UniqueName: \"kubernetes.io/projected/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-kube-api-access-f9fxm\") pod \"nova-cell0-conductor-db-sync-7vjhl\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:23 crc kubenswrapper[4695]: I1126 13:46:23.629769 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7vjhl"
Nov 26 13:46:24 crc kubenswrapper[4695]: W1126 13:46:24.127123 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod056cfe2f_f427_4ab5_b43d_eac297c8cbd8.slice/crio-0d67350137a62af8b875a50d9f178d9e82f0e1070362d512ccb8533b6aaa6228 WatchSource:0}: Error finding container 0d67350137a62af8b875a50d9f178d9e82f0e1070362d512ccb8533b6aaa6228: Status 404 returned error can't find the container with id 0d67350137a62af8b875a50d9f178d9e82f0e1070362d512ccb8533b6aaa6228
Nov 26 13:46:24 crc kubenswrapper[4695]: I1126 13:46:24.130267 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7vjhl"]
Nov 26 13:46:24 crc kubenswrapper[4695]: I1126 13:46:24.805807 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7vjhl" event={"ID":"056cfe2f-f427-4ab5-b43d-eac297c8cbd8","Type":"ContainerStarted","Data":"0d67350137a62af8b875a50d9f178d9e82f0e1070362d512ccb8533b6aaa6228"}
Nov 26 13:46:27 crc kubenswrapper[4695]: I1126 13:46:27.845444 4695 generic.go:334] "Generic (PLEG): container finished" podID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerID="24d6ad1293d13b9f779604f8373a681222e97fc5a9ea7b8342d82affd941f075" exitCode=0
Nov 26 13:46:27 crc kubenswrapper[4695]: I1126 13:46:27.845552 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerDied","Data":"24d6ad1293d13b9f779604f8373a681222e97fc5a9ea7b8342d82affd941f075"}
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.284238 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.284755 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-log" containerID="cri-o://d7814041a442373424a893c2909448005d29f818124b28d7bdfdd3db02375560" gracePeriod=30
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.284861 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-httpd" containerID="cri-o://a6447666e61865216259c7f6f5d4e32a29d90e7d887dd0c31dbb838eea65a003" gracePeriod=30
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.892601 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc17e7a8-250d-4902-9829-59e4e9d9e258","Type":"ContainerDied","Data":"150055a756d34f9e4baea0a688b32dbe99bc0c000662a12669a45c7e1635e9d8"}
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.893188 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="150055a756d34f9e4baea0a688b32dbe99bc0c000662a12669a45c7e1635e9d8"
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.896204 4695 generic.go:334] "Generic (PLEG): container finished" podID="560d175c-207c-4842-bbc0-64852bc173d6" containerID="d7814041a442373424a893c2909448005d29f818124b28d7bdfdd3db02375560" exitCode=143
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.896275 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"560d175c-207c-4842-bbc0-64852bc173d6","Type":"ContainerDied","Data":"d7814041a442373424a893c2909448005d29f818124b28d7bdfdd3db02375560"}
Nov 26 13:46:31 crc kubenswrapper[4695]: I1126 13:46:31.938308 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.094281 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-log-httpd\") pod \"cc17e7a8-250d-4902-9829-59e4e9d9e258\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") "
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.094975 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc17e7a8-250d-4902-9829-59e4e9d9e258" (UID: "cc17e7a8-250d-4902-9829-59e4e9d9e258"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.095187 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-sg-core-conf-yaml\") pod \"cc17e7a8-250d-4902-9829-59e4e9d9e258\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") "
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.095322 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-combined-ca-bundle\") pod \"cc17e7a8-250d-4902-9829-59e4e9d9e258\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") "
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.095453 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-config-data\") pod \"cc17e7a8-250d-4902-9829-59e4e9d9e258\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") "
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.095763 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bm6j\" (UniqueName: \"kubernetes.io/projected/cc17e7a8-250d-4902-9829-59e4e9d9e258-kube-api-access-9bm6j\") pod \"cc17e7a8-250d-4902-9829-59e4e9d9e258\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") "
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.096160 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-run-httpd\") pod \"cc17e7a8-250d-4902-9829-59e4e9d9e258\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") "
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.096283 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-scripts\") pod \"cc17e7a8-250d-4902-9829-59e4e9d9e258\" (UID: \"cc17e7a8-250d-4902-9829-59e4e9d9e258\") "
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.096589 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc17e7a8-250d-4902-9829-59e4e9d9e258" (UID: "cc17e7a8-250d-4902-9829-59e4e9d9e258"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.097572 4695 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-run-httpd\") on node \"crc\" DevicePath \"\""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.097689 4695 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc17e7a8-250d-4902-9829-59e4e9d9e258-log-httpd\") on node \"crc\" DevicePath \"\""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.100103 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-scripts" (OuterVolumeSpecName: "scripts") pod "cc17e7a8-250d-4902-9829-59e4e9d9e258" (UID: "cc17e7a8-250d-4902-9829-59e4e9d9e258"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.100250 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc17e7a8-250d-4902-9829-59e4e9d9e258-kube-api-access-9bm6j" (OuterVolumeSpecName: "kube-api-access-9bm6j") pod "cc17e7a8-250d-4902-9829-59e4e9d9e258" (UID: "cc17e7a8-250d-4902-9829-59e4e9d9e258"). InnerVolumeSpecName "kube-api-access-9bm6j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.134884 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.135117 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-log" containerID="cri-o://13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b" gracePeriod=30
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.135250 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-httpd" containerID="cri-o://24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86" gracePeriod=30
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.144413 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc17e7a8-250d-4902-9829-59e4e9d9e258" (UID: "cc17e7a8-250d-4902-9829-59e4e9d9e258"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.181915 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc17e7a8-250d-4902-9829-59e4e9d9e258" (UID: "cc17e7a8-250d-4902-9829-59e4e9d9e258"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.200033 4695 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.200071 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.200085 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bm6j\" (UniqueName: \"kubernetes.io/projected/cc17e7a8-250d-4902-9829-59e4e9d9e258-kube-api-access-9bm6j\") on node \"crc\" DevicePath \"\""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.200099 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-scripts\") on node \"crc\" DevicePath \"\""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.217818 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-config-data" (OuterVolumeSpecName: "config-data") pod "cc17e7a8-250d-4902-9829-59e4e9d9e258" (UID: "cc17e7a8-250d-4902-9829-59e4e9d9e258"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.302324 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc17e7a8-250d-4902-9829-59e4e9d9e258-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.905114 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7vjhl" event={"ID":"056cfe2f-f427-4ab5-b43d-eac297c8cbd8","Type":"ContainerStarted","Data":"e7118dbebb9d508125bb3d9e5c851b89e9fb9c3c8a32846337bad8ca0fb3eb2c"}
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.907761 4695 generic.go:334] "Generic (PLEG): container finished" podID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerID="13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b" exitCode=143
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.907882 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.907889 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7bc51300-b52c-4dc8-b337-1b7a15539971","Type":"ContainerDied","Data":"13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b"}
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.922671 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7vjhl" podStartSLOduration=2.200210115 podStartE2EDuration="9.922656035s" podCreationTimestamp="2025-11-26 13:46:23 +0000 UTC" firstStartedPulling="2025-11-26 13:46:24.129284005 +0000 UTC m=+1367.765109087" lastFinishedPulling="2025-11-26 13:46:31.851729925 +0000 UTC m=+1375.487555007" observedRunningTime="2025-11-26 13:46:32.920034212 +0000 UTC m=+1376.555859314" watchObservedRunningTime="2025-11-26 13:46:32.922656035 +0000 UTC m=+1376.558481117"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.940782 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.951723 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.963471 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Nov 26 13:46:32 crc kubenswrapper[4695]: E1126 13:46:32.964012 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-central-agent"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.964133 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-central-agent"
Nov 26 13:46:32 crc kubenswrapper[4695]: E1126 13:46:32.964223 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-notification-agent"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.964287 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-notification-agent"
Nov 26 13:46:32 crc kubenswrapper[4695]: E1126 13:46:32.964389 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="sg-core"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.964454 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="sg-core"
Nov 26 13:46:32 crc kubenswrapper[4695]: E1126 13:46:32.964511 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="proxy-httpd"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.964574 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="proxy-httpd"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.964826 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="sg-core"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.964895 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-central-agent"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.964959 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="proxy-httpd"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.965027 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" containerName="ceilometer-notification-agent"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.970701 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.974802 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.977624 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Nov 26 13:46:32 crc kubenswrapper[4695]: I1126 13:46:32.982191 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.058914 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 13:46:33 crc kubenswrapper[4695]: E1126 13:46:33.059574 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-6qvzb log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="3c8b25d7-4bb1-4f17-851e-c0fac09e9660"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.118670 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-run-httpd\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.118743 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qvzb\" (UniqueName: \"kubernetes.io/projected/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-kube-api-access-6qvzb\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.118773 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-config-data\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.118798 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.119101 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.119178 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-log-httpd\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.119225 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-scripts\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.173340 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc17e7a8-250d-4902-9829-59e4e9d9e258" path="/var/lib/kubelet/pods/cc17e7a8-250d-4902-9829-59e4e9d9e258/volumes"
Nov 26 13:46:33 crc kubenswrapper[4695]: I1126
13:46:33.220509 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.220546 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-log-httpd\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.220572 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-scripts\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.220596 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-run-httpd\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.220635 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qvzb\" (UniqueName: \"kubernetes.io/projected/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-kube-api-access-6qvzb\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.220673 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-config-data\") pod \"ceilometer-0\" (UID: 
\"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.220699 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.222200 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-run-httpd\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.222445 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-log-httpd\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.227564 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.239992 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-scripts\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.244214 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-config-data\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.264002 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.267568 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qvzb\" (UniqueName: \"kubernetes.io/projected/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-kube-api-access-6qvzb\") pod \"ceilometer-0\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.918782 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:33 crc kubenswrapper[4695]: I1126 13:46:33.937107 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.042501 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qvzb\" (UniqueName: \"kubernetes.io/projected/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-kube-api-access-6qvzb\") pod \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.042735 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-config-data\") pod \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.042787 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-log-httpd\") pod \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.042812 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-run-httpd\") pod \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.042904 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-scripts\") pod \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.042943 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-combined-ca-bundle\") pod \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.043033 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-sg-core-conf-yaml\") pod \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\" (UID: \"3c8b25d7-4bb1-4f17-851e-c0fac09e9660\") " Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.043938 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3c8b25d7-4bb1-4f17-851e-c0fac09e9660" (UID: "3c8b25d7-4bb1-4f17-851e-c0fac09e9660"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.047635 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-config-data" (OuterVolumeSpecName: "config-data") pod "3c8b25d7-4bb1-4f17-851e-c0fac09e9660" (UID: "3c8b25d7-4bb1-4f17-851e-c0fac09e9660"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.047921 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3c8b25d7-4bb1-4f17-851e-c0fac09e9660" (UID: "3c8b25d7-4bb1-4f17-851e-c0fac09e9660"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.055519 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3c8b25d7-4bb1-4f17-851e-c0fac09e9660" (UID: "3c8b25d7-4bb1-4f17-851e-c0fac09e9660"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.055568 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c8b25d7-4bb1-4f17-851e-c0fac09e9660" (UID: "3c8b25d7-4bb1-4f17-851e-c0fac09e9660"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.058163 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-scripts" (OuterVolumeSpecName: "scripts") pod "3c8b25d7-4bb1-4f17-851e-c0fac09e9660" (UID: "3c8b25d7-4bb1-4f17-851e-c0fac09e9660"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.074918 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-kube-api-access-6qvzb" (OuterVolumeSpecName: "kube-api-access-6qvzb") pod "3c8b25d7-4bb1-4f17-851e-c0fac09e9660" (UID: "3c8b25d7-4bb1-4f17-851e-c0fac09e9660"). InnerVolumeSpecName "kube-api-access-6qvzb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.145628 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.145661 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.145673 4695 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.145681 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qvzb\" (UniqueName: \"kubernetes.io/projected/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-kube-api-access-6qvzb\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.145690 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.145699 4695 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.145708 4695 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3c8b25d7-4bb1-4f17-851e-c0fac09e9660-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.927809 4695 generic.go:334] 
"Generic (PLEG): container finished" podID="560d175c-207c-4842-bbc0-64852bc173d6" containerID="a6447666e61865216259c7f6f5d4e32a29d90e7d887dd0c31dbb838eea65a003" exitCode=0 Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.927937 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"560d175c-207c-4842-bbc0-64852bc173d6","Type":"ContainerDied","Data":"a6447666e61865216259c7f6f5d4e32a29d90e7d887dd0c31dbb838eea65a003"} Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.928246 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"560d175c-207c-4842-bbc0-64852bc173d6","Type":"ContainerDied","Data":"095062ba25eae08a0bf5682e602ff2cab33e1302123e723d592ae3758dec3f95"} Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.928274 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="095062ba25eae08a0bf5682e602ff2cab33e1302123e723d592ae3758dec3f95" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.928259 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:34 crc kubenswrapper[4695]: I1126 13:46:34.940524 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.035393 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.041411 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.050993 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:35 crc kubenswrapper[4695]: E1126 13:46:35.051493 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-httpd" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.051515 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-httpd" Nov 26 13:46:35 crc kubenswrapper[4695]: E1126 13:46:35.051546 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-log" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.051556 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-log" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.051770 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-log" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.051799 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="560d175c-207c-4842-bbc0-64852bc173d6" containerName="glance-httpd" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.053871 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.057031 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.057270 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.058620 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074303 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkvkt\" (UniqueName: \"kubernetes.io/projected/560d175c-207c-4842-bbc0-64852bc173d6-kube-api-access-xkvkt\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074358 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-config-data\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074514 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-public-tls-certs\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074538 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-logs\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074570 
4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-httpd-run\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074587 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074643 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-combined-ca-bundle\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.074684 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-scripts\") pod \"560d175c-207c-4842-bbc0-64852bc173d6\" (UID: \"560d175c-207c-4842-bbc0-64852bc173d6\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.075921 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-logs" (OuterVolumeSpecName: "logs") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: "560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.079656 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: "560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.081852 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/560d175c-207c-4842-bbc0-64852bc173d6-kube-api-access-xkvkt" (OuterVolumeSpecName: "kube-api-access-xkvkt") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: "560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "kube-api-access-xkvkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.087874 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: "560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.098010 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-scripts" (OuterVolumeSpecName: "scripts") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: "560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.123881 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: "560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.159405 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-config-data" (OuterVolumeSpecName: "config-data") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: "560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.173736 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8b25d7-4bb1-4f17-851e-c0fac09e9660" path="/var/lib/kubelet/pods/3c8b25d7-4bb1-4f17-851e-c0fac09e9660/volumes" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.176773 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.176848 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-config-data\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.176908 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-scripts\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.176991 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177099 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-log-httpd\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177124 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-run-httpd\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177186 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5j69\" (UniqueName: \"kubernetes.io/projected/416de75f-33b6-4864-9d33-799fb2413609-kube-api-access-r5j69\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177294 4695 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177338 4695 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177371 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177383 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177393 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkvkt\" (UniqueName: \"kubernetes.io/projected/560d175c-207c-4842-bbc0-64852bc173d6-kube-api-access-xkvkt\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177402 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.177411 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/560d175c-207c-4842-bbc0-64852bc173d6-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.188692 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "560d175c-207c-4842-bbc0-64852bc173d6" (UID: 
"560d175c-207c-4842-bbc0-64852bc173d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.196970 4695 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.278723 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-log-httpd\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.278781 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-run-httpd\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.278814 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5j69\" (UniqueName: \"kubernetes.io/projected/416de75f-33b6-4864-9d33-799fb2413609-kube-api-access-r5j69\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.278902 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.278945 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-config-data\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.278972 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-scripts\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.279033 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.279119 4695 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/560d175c-207c-4842-bbc0-64852bc173d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.279135 4695 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.280792 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-log-httpd\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.280919 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-run-httpd\") pod \"ceilometer-0\" (UID: 
\"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.284326 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.284398 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.284512 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-config-data\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.286148 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-scripts\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.298133 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5j69\" (UniqueName: \"kubernetes.io/projected/416de75f-33b6-4864-9d33-799fb2413609-kube-api-access-r5j69\") pod \"ceilometer-0\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.373731 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.762434 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.855755 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892326 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-logs\") pod \"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892451 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892542 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-httpd-run\") pod \"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892666 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-internal-tls-certs\") pod \"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892753 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-scripts\") pod 
\"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892819 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-config-data\") pod \"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892857 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-combined-ca-bundle\") pod \"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892889 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5hcq\" (UniqueName: \"kubernetes.io/projected/7bc51300-b52c-4dc8-b337-1b7a15539971-kube-api-access-v5hcq\") pod \"7bc51300-b52c-4dc8-b337-1b7a15539971\" (UID: \"7bc51300-b52c-4dc8-b337-1b7a15539971\") " Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.892971 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-logs" (OuterVolumeSpecName: "logs") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.893317 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.893598 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.898478 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.898513 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-scripts" (OuterVolumeSpecName: "scripts") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.901228 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc51300-b52c-4dc8-b337-1b7a15539971-kube-api-access-v5hcq" (OuterVolumeSpecName: "kube-api-access-v5hcq") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "kube-api-access-v5hcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.927800 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.942324 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerStarted","Data":"ddbb54b50158c9204c8638a53398d4210d6e0901a5b941aae099481852086948"} Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.948848 4695 generic.go:334] "Generic (PLEG): container finished" podID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerID="24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86" exitCode=0 Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.948929 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.948915 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7bc51300-b52c-4dc8-b337-1b7a15539971","Type":"ContainerDied","Data":"24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86"} Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.948975 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"7bc51300-b52c-4dc8-b337-1b7a15539971","Type":"ContainerDied","Data":"b0240d38006870c9ac6cb645a05a689c6f706ae6c931a2aa2cf994dd321f8de8"} Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.948979 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.948993 4695 scope.go:117] "RemoveContainer" containerID="24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.962780 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.967531 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-config-data" (OuterVolumeSpecName: "config-data") pod "7bc51300-b52c-4dc8-b337-1b7a15539971" (UID: "7bc51300-b52c-4dc8-b337-1b7a15539971"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:35 crc kubenswrapper[4695]: I1126 13:46:35.995902 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:35.997682 4695 scope.go:117] "RemoveContainer" containerID="13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.002854 4695 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7bc51300-b52c-4dc8-b337-1b7a15539971-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.005100 4695 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.005119 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.005137 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.005151 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bc51300-b52c-4dc8-b337-1b7a15539971-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.005162 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5hcq\" (UniqueName: \"kubernetes.io/projected/7bc51300-b52c-4dc8-b337-1b7a15539971-kube-api-access-v5hcq\") on node \"crc\" DevicePath \"\"" Nov 26 
13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.005193 4695 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.017253 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.025575 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: E1126 13:46:36.026035 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-httpd" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.026055 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-httpd" Nov 26 13:46:36 crc kubenswrapper[4695]: E1126 13:46:36.026076 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-log" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.026082 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-log" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.026250 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-log" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.026270 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" containerName="glance-httpd" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.027176 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.029745 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.029940 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.030693 4695 scope.go:117] "RemoveContainer" containerID="24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86" Nov 26 13:46:36 crc kubenswrapper[4695]: E1126 13:46:36.031326 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86\": container with ID starting with 24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86 not found: ID does not exist" containerID="24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.031467 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86"} err="failed to get container status \"24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86\": rpc error: code = NotFound desc = could not find container \"24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86\": container with ID starting with 24792919d27c6752e601b05e9eee5f131b0de153c76f8ea3275dc8b8e0b22d86 not found: ID does not exist" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.031524 4695 scope.go:117] "RemoveContainer" containerID="13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b" Nov 26 13:46:36 crc kubenswrapper[4695]: E1126 13:46:36.032226 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b\": container with ID starting with 13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b not found: ID does not exist" containerID="13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.032265 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b"} err="failed to get container status \"13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b\": rpc error: code = NotFound desc = could not find container \"13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b\": container with ID starting with 13eadbad4ddf6dddd3b7766139a97adc3af7e0ad595a750b710cc73654abe96b not found: ID does not exist" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.036926 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.076164 4695 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.107614 4695 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.208999 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 
13:46:36.209049 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5a16aeb-231a-4012-9aed-ab91a1fab41e-logs\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.209225 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct565\" (UniqueName: \"kubernetes.io/projected/f5a16aeb-231a-4012-9aed-ab91a1fab41e-kube-api-access-ct565\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.209287 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5a16aeb-231a-4012-9aed-ab91a1fab41e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.209462 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.209509 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc 
kubenswrapper[4695]: I1126 13:46:36.209540 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.209618 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.285382 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.294394 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.310311 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311195 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311230 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " 
pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311252 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311294 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311331 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311370 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5a16aeb-231a-4012-9aed-ab91a1fab41e-logs\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311428 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct565\" (UniqueName: \"kubernetes.io/projected/f5a16aeb-231a-4012-9aed-ab91a1fab41e-kube-api-access-ct565\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc 
kubenswrapper[4695]: I1126 13:46:36.311450 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5a16aeb-231a-4012-9aed-ab91a1fab41e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311661 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.311902 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5a16aeb-231a-4012-9aed-ab91a1fab41e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.312054 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.312127 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5a16aeb-231a-4012-9aed-ab91a1fab41e-logs\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.316929 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-scripts\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.318320 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.321133 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.328608 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5a16aeb-231a-4012-9aed-ab91a1fab41e-config-data\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 
13:46:36.347079 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.349385 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.350113 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.358020 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.371248 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct565\" (UniqueName: \"kubernetes.io/projected/f5a16aeb-231a-4012-9aed-ab91a1fab41e-kube-api-access-ct565\") pod \"glance-default-external-api-0\" (UID: \"f5a16aeb-231a-4012-9aed-ab91a1fab41e\") " pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.396906 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.396950 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:46:36 crc 
kubenswrapper[4695]: I1126 13:46:36.396987 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.397803 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d704a070a53ff2c48f0d0e2fcd3340ab686270f91e63ed80bdf2afa4d7bc31e8"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.397870 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://d704a070a53ff2c48f0d0e2fcd3340ab686270f91e63ed80bdf2afa4d7bc31e8" gracePeriod=600 Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.412991 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.413071 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d83aab2a-dcf3-44a5-9616-e19d698ea43d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.413115 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.413147 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4nq8\" (UniqueName: \"kubernetes.io/projected/d83aab2a-dcf3-44a5-9616-e19d698ea43d-kube-api-access-s4nq8\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.413214 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.413244 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.413274 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.413438 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83aab2a-dcf3-44a5-9616-e19d698ea43d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515012 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4nq8\" (UniqueName: \"kubernetes.io/projected/d83aab2a-dcf3-44a5-9616-e19d698ea43d-kube-api-access-s4nq8\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515119 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515158 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515190 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515298 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d83aab2a-dcf3-44a5-9616-e19d698ea43d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515360 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515407 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d83aab2a-dcf3-44a5-9616-e19d698ea43d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515442 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515527 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.515942 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d83aab2a-dcf3-44a5-9616-e19d698ea43d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.516113 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d83aab2a-dcf3-44a5-9616-e19d698ea43d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.519635 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.520223 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.520668 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.534110 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d83aab2a-dcf3-44a5-9616-e19d698ea43d-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.534528 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4nq8\" (UniqueName: \"kubernetes.io/projected/d83aab2a-dcf3-44a5-9616-e19d698ea43d-kube-api-access-s4nq8\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.546959 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"d83aab2a-dcf3-44a5-9616-e19d698ea43d\") " pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.644488 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.741088 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.964609 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="d704a070a53ff2c48f0d0e2fcd3340ab686270f91e63ed80bdf2afa4d7bc31e8" exitCode=0 Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.964910 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"d704a070a53ff2c48f0d0e2fcd3340ab686270f91e63ed80bdf2afa4d7bc31e8"} Nov 26 13:46:36 crc kubenswrapper[4695]: I1126 13:46:36.964942 4695 scope.go:117] "RemoveContainer" containerID="d5ada8ee218c5b0e5eb69bc5a99a08479c0f37839560e96a3518e7444c465fbd" Nov 26 13:46:37 crc kubenswrapper[4695]: I1126 13:46:37.179578 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="560d175c-207c-4842-bbc0-64852bc173d6" path="/var/lib/kubelet/pods/560d175c-207c-4842-bbc0-64852bc173d6/volumes" Nov 26 13:46:37 crc kubenswrapper[4695]: I1126 13:46:37.180722 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bc51300-b52c-4dc8-b337-1b7a15539971" path="/var/lib/kubelet/pods/7bc51300-b52c-4dc8-b337-1b7a15539971/volumes" Nov 26 13:46:37 crc kubenswrapper[4695]: I1126 13:46:37.355472 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 26 13:46:37 crc kubenswrapper[4695]: W1126 13:46:37.357334 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5a16aeb_231a_4012_9aed_ab91a1fab41e.slice/crio-70492292e0eb43430288ddc5397d0ca747eef5a992ffcdda68d69d25f98e48bc WatchSource:0}: Error finding container 70492292e0eb43430288ddc5397d0ca747eef5a992ffcdda68d69d25f98e48bc: Status 404 returned error can't find the container with id 
70492292e0eb43430288ddc5397d0ca747eef5a992ffcdda68d69d25f98e48bc Nov 26 13:46:37 crc kubenswrapper[4695]: I1126 13:46:37.509278 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 26 13:46:37 crc kubenswrapper[4695]: W1126 13:46:37.518724 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd83aab2a_dcf3_44a5_9616_e19d698ea43d.slice/crio-e0328c04f5143429e3cf142075086b27a9fbda7bd0beea5af0316fa7443cb1ed WatchSource:0}: Error finding container e0328c04f5143429e3cf142075086b27a9fbda7bd0beea5af0316fa7443cb1ed: Status 404 returned error can't find the container with id e0328c04f5143429e3cf142075086b27a9fbda7bd0beea5af0316fa7443cb1ed Nov 26 13:46:38 crc kubenswrapper[4695]: I1126 13:46:37.999931 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5a16aeb-231a-4012-9aed-ab91a1fab41e","Type":"ContainerStarted","Data":"47e746f4f8c27b151fd0152fb0d1099f2c644a3d3d252c10dbe37ff97fc8e989"} Nov 26 13:46:38 crc kubenswrapper[4695]: I1126 13:46:38.001189 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5a16aeb-231a-4012-9aed-ab91a1fab41e","Type":"ContainerStarted","Data":"70492292e0eb43430288ddc5397d0ca747eef5a992ffcdda68d69d25f98e48bc"} Nov 26 13:46:38 crc kubenswrapper[4695]: I1126 13:46:38.003207 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerStarted","Data":"f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221"} Nov 26 13:46:38 crc kubenswrapper[4695]: I1126 13:46:38.004157 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d83aab2a-dcf3-44a5-9616-e19d698ea43d","Type":"ContainerStarted","Data":"e0328c04f5143429e3cf142075086b27a9fbda7bd0beea5af0316fa7443cb1ed"} Nov 26 13:46:38 crc kubenswrapper[4695]: I1126 13:46:38.005890 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306"} Nov 26 13:46:39 crc kubenswrapper[4695]: I1126 13:46:39.024154 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerStarted","Data":"28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec"} Nov 26 13:46:39 crc kubenswrapper[4695]: I1126 13:46:39.027378 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d83aab2a-dcf3-44a5-9616-e19d698ea43d","Type":"ContainerStarted","Data":"6ae227c99f661ad9a6ed23f9ff07e67f90a8c98d814570b26bf2594f609f239f"} Nov 26 13:46:39 crc kubenswrapper[4695]: I1126 13:46:39.027472 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d83aab2a-dcf3-44a5-9616-e19d698ea43d","Type":"ContainerStarted","Data":"8154152d7d228d7f0b3a694f8a0c9ee1de2d3fb03fbf2337c0c84f7079eb7180"} Nov 26 13:46:39 crc kubenswrapper[4695]: I1126 13:46:39.029772 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f5a16aeb-231a-4012-9aed-ab91a1fab41e","Type":"ContainerStarted","Data":"2920ddbd7457d7a547d700dd8d046cc52dbc2a28c94aa7efc50db7d16cd48909"} Nov 26 13:46:39 crc kubenswrapper[4695]: I1126 13:46:39.054641 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.054620762 podStartE2EDuration="3.054620762s" 
podCreationTimestamp="2025-11-26 13:46:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:46:39.052897647 +0000 UTC m=+1382.688722739" watchObservedRunningTime="2025-11-26 13:46:39.054620762 +0000 UTC m=+1382.690445844" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.674158 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.67413174 podStartE2EDuration="5.67413174s" podCreationTimestamp="2025-11-26 13:46:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:46:39.072720278 +0000 UTC m=+1382.708545360" watchObservedRunningTime="2025-11-26 13:46:40.67413174 +0000 UTC m=+1384.309956842" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.688600 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b46mq"] Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.691839 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.701962 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b46mq"] Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.796209 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmz9g\" (UniqueName: \"kubernetes.io/projected/7c604cd9-92a3-4327-bde2-7f8706d8ca99-kube-api-access-bmz9g\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.796265 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-utilities\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.796313 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-catalog-content\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.898716 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmz9g\" (UniqueName: \"kubernetes.io/projected/7c604cd9-92a3-4327-bde2-7f8706d8ca99-kube-api-access-bmz9g\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.898807 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-utilities\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.898872 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-catalog-content\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.899643 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-catalog-content\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.899681 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-utilities\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:40 crc kubenswrapper[4695]: I1126 13:46:40.919635 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmz9g\" (UniqueName: \"kubernetes.io/projected/7c604cd9-92a3-4327-bde2-7f8706d8ca99-kube-api-access-bmz9g\") pod \"redhat-marketplace-b46mq\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:41 crc kubenswrapper[4695]: I1126 13:46:41.010974 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:41 crc kubenswrapper[4695]: I1126 13:46:41.056407 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerStarted","Data":"1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281"} Nov 26 13:46:41 crc kubenswrapper[4695]: I1126 13:46:41.471657 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b46mq"] Nov 26 13:46:42 crc kubenswrapper[4695]: I1126 13:46:42.068565 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerStarted","Data":"acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f"} Nov 26 13:46:42 crc kubenswrapper[4695]: I1126 13:46:42.068909 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 13:46:42 crc kubenswrapper[4695]: I1126 13:46:42.080899 4695 generic.go:334] "Generic (PLEG): container finished" podID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerID="a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a" exitCode=0 Nov 26 13:46:42 crc kubenswrapper[4695]: I1126 13:46:42.080940 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b46mq" event={"ID":"7c604cd9-92a3-4327-bde2-7f8706d8ca99","Type":"ContainerDied","Data":"a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a"} Nov 26 13:46:42 crc kubenswrapper[4695]: I1126 13:46:42.080959 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b46mq" event={"ID":"7c604cd9-92a3-4327-bde2-7f8706d8ca99","Type":"ContainerStarted","Data":"6add436dc9d80d9eaac19c9e62fdba7561f8740581736e91cbca1a8b3471831f"} Nov 26 13:46:42 crc kubenswrapper[4695]: I1126 13:46:42.092208 4695 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.494060582 podStartE2EDuration="8.09218068s" podCreationTimestamp="2025-11-26 13:46:34 +0000 UTC" firstStartedPulling="2025-11-26 13:46:35.881570117 +0000 UTC m=+1379.517395199" lastFinishedPulling="2025-11-26 13:46:41.479690185 +0000 UTC m=+1385.115515297" observedRunningTime="2025-11-26 13:46:42.088921026 +0000 UTC m=+1385.724746118" watchObservedRunningTime="2025-11-26 13:46:42.09218068 +0000 UTC m=+1385.728005762" Nov 26 13:46:43 crc kubenswrapper[4695]: I1126 13:46:43.090078 4695 generic.go:334] "Generic (PLEG): container finished" podID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerID="8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7" exitCode=0 Nov 26 13:46:43 crc kubenswrapper[4695]: I1126 13:46:43.090157 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b46mq" event={"ID":"7c604cd9-92a3-4327-bde2-7f8706d8ca99","Type":"ContainerDied","Data":"8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7"} Nov 26 13:46:44 crc kubenswrapper[4695]: I1126 13:46:44.102014 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b46mq" event={"ID":"7c604cd9-92a3-4327-bde2-7f8706d8ca99","Type":"ContainerStarted","Data":"ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b"} Nov 26 13:46:44 crc kubenswrapper[4695]: I1126 13:46:44.132648 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b46mq" podStartSLOduration=2.55705904 podStartE2EDuration="4.132626279s" podCreationTimestamp="2025-11-26 13:46:40 +0000 UTC" firstStartedPulling="2025-11-26 13:46:42.098212491 +0000 UTC m=+1385.734037573" lastFinishedPulling="2025-11-26 13:46:43.67377972 +0000 UTC m=+1387.309604812" observedRunningTime="2025-11-26 13:46:44.122147985 +0000 UTC m=+1387.757973087" 
watchObservedRunningTime="2025-11-26 13:46:44.132626279 +0000 UTC m=+1387.768451361" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.120020 4695 generic.go:334] "Generic (PLEG): container finished" podID="056cfe2f-f427-4ab5-b43d-eac297c8cbd8" containerID="e7118dbebb9d508125bb3d9e5c851b89e9fb9c3c8a32846337bad8ca0fb3eb2c" exitCode=0 Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.120332 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7vjhl" event={"ID":"056cfe2f-f427-4ab5-b43d-eac297c8cbd8","Type":"ContainerDied","Data":"e7118dbebb9d508125bb3d9e5c851b89e9fb9c3c8a32846337bad8ca0fb3eb2c"} Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.646025 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.646388 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.684614 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.699650 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.742239 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.742403 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 13:46:46.770383 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:46 crc kubenswrapper[4695]: I1126 
13:46:46.782446 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.129375 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.129409 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.129421 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.129429 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.474989 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7vjhl" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.612584 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-combined-ca-bundle\") pod \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.612686 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-scripts\") pod \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.612732 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-config-data\") pod 
\"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.612812 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9fxm\" (UniqueName: \"kubernetes.io/projected/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-kube-api-access-f9fxm\") pod \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\" (UID: \"056cfe2f-f427-4ab5-b43d-eac297c8cbd8\") " Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.632205 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-kube-api-access-f9fxm" (OuterVolumeSpecName: "kube-api-access-f9fxm") pod "056cfe2f-f427-4ab5-b43d-eac297c8cbd8" (UID: "056cfe2f-f427-4ab5-b43d-eac297c8cbd8"). InnerVolumeSpecName "kube-api-access-f9fxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.632837 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-scripts" (OuterVolumeSpecName: "scripts") pod "056cfe2f-f427-4ab5-b43d-eac297c8cbd8" (UID: "056cfe2f-f427-4ab5-b43d-eac297c8cbd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.661153 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "056cfe2f-f427-4ab5-b43d-eac297c8cbd8" (UID: "056cfe2f-f427-4ab5-b43d-eac297c8cbd8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.662042 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-config-data" (OuterVolumeSpecName: "config-data") pod "056cfe2f-f427-4ab5-b43d-eac297c8cbd8" (UID: "056cfe2f-f427-4ab5-b43d-eac297c8cbd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.714966 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.715259 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.715364 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:47 crc kubenswrapper[4695]: I1126 13:46:47.715455 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9fxm\" (UniqueName: \"kubernetes.io/projected/056cfe2f-f427-4ab5-b43d-eac297c8cbd8-kube-api-access-f9fxm\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.138938 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7vjhl" event={"ID":"056cfe2f-f427-4ab5-b43d-eac297c8cbd8","Type":"ContainerDied","Data":"0d67350137a62af8b875a50d9f178d9e82f0e1070362d512ccb8533b6aaa6228"} Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.140500 4695 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="0d67350137a62af8b875a50d9f178d9e82f0e1070362d512ccb8533b6aaa6228" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.139108 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7vjhl" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.258083 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 13:46:48 crc kubenswrapper[4695]: E1126 13:46:48.258862 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="056cfe2f-f427-4ab5-b43d-eac297c8cbd8" containerName="nova-cell0-conductor-db-sync" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.258957 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="056cfe2f-f427-4ab5-b43d-eac297c8cbd8" containerName="nova-cell0-conductor-db-sync" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.259248 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="056cfe2f-f427-4ab5-b43d-eac297c8cbd8" containerName="nova-cell0-conductor-db-sync" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.259952 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.262982 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-jl79p" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.265753 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.270207 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.329800 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.332835 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.334266 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdj8d\" (UniqueName: \"kubernetes.io/projected/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-kube-api-access-bdj8d\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.436193 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.436299 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdj8d\" (UniqueName: \"kubernetes.io/projected/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-kube-api-access-bdj8d\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.436388 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.451737 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.453241 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.454977 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdj8d\" (UniqueName: \"kubernetes.io/projected/5a3fc356-06d7-4e26-bcfb-c610dc6e02be-kube-api-access-bdj8d\") pod \"nova-cell0-conductor-0\" 
(UID: \"5a3fc356-06d7-4e26-bcfb-c610dc6e02be\") " pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:48 crc kubenswrapper[4695]: I1126 13:46:48.595224 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:49 crc kubenswrapper[4695]: I1126 13:46:49.087473 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 26 13:46:49 crc kubenswrapper[4695]: I1126 13:46:49.133532 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 13:46:49 crc kubenswrapper[4695]: I1126 13:46:49.144565 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:49 crc kubenswrapper[4695]: I1126 13:46:49.147913 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 26 13:46:49 crc kubenswrapper[4695]: I1126 13:46:49.163258 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5a3fc356-06d7-4e26-bcfb-c610dc6e02be","Type":"ContainerStarted","Data":"bdd0a5f6d55e0852c9130b7dfd888c953b00313b7385e7b9201128c9660dd080"} Nov 26 13:46:49 crc kubenswrapper[4695]: I1126 13:46:49.163623 4695 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 26 13:46:49 crc kubenswrapper[4695]: I1126 13:46:49.219373 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 26 13:46:50 crc kubenswrapper[4695]: I1126 13:46:50.174175 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5a3fc356-06d7-4e26-bcfb-c610dc6e02be","Type":"ContainerStarted","Data":"677c6f20a8b2b185ce5e7d785247547fcc04ec3b06638b43e77769e26677e631"} Nov 26 13:46:50 crc kubenswrapper[4695]: I1126 13:46:50.175436 4695 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:50 crc kubenswrapper[4695]: I1126 13:46:50.189259 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.189239465 podStartE2EDuration="2.189239465s" podCreationTimestamp="2025-11-26 13:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:46:50.187131418 +0000 UTC m=+1393.822956510" watchObservedRunningTime="2025-11-26 13:46:50.189239465 +0000 UTC m=+1393.825064547" Nov 26 13:46:51 crc kubenswrapper[4695]: I1126 13:46:51.011821 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:51 crc kubenswrapper[4695]: I1126 13:46:51.012399 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:51 crc kubenswrapper[4695]: I1126 13:46:51.077506 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:51 crc kubenswrapper[4695]: I1126 13:46:51.226154 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:51 crc kubenswrapper[4695]: I1126 13:46:51.321628 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b46mq"] Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.202142 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b46mq" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="registry-server" containerID="cri-o://ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b" gracePeriod=2 Nov 26 13:46:53 crc 
kubenswrapper[4695]: I1126 13:46:53.617389 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.645592 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-catalog-content\") pod \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.645662 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmz9g\" (UniqueName: \"kubernetes.io/projected/7c604cd9-92a3-4327-bde2-7f8706d8ca99-kube-api-access-bmz9g\") pod \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.645706 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-utilities\") pod \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\" (UID: \"7c604cd9-92a3-4327-bde2-7f8706d8ca99\") " Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.647010 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-utilities" (OuterVolumeSpecName: "utilities") pod "7c604cd9-92a3-4327-bde2-7f8706d8ca99" (UID: "7c604cd9-92a3-4327-bde2-7f8706d8ca99"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.655406 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c604cd9-92a3-4327-bde2-7f8706d8ca99-kube-api-access-bmz9g" (OuterVolumeSpecName: "kube-api-access-bmz9g") pod "7c604cd9-92a3-4327-bde2-7f8706d8ca99" (UID: "7c604cd9-92a3-4327-bde2-7f8706d8ca99"). InnerVolumeSpecName "kube-api-access-bmz9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.666857 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c604cd9-92a3-4327-bde2-7f8706d8ca99" (UID: "7c604cd9-92a3-4327-bde2-7f8706d8ca99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.747987 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.748023 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c604cd9-92a3-4327-bde2-7f8706d8ca99-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:53 crc kubenswrapper[4695]: I1126 13:46:53.748036 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmz9g\" (UniqueName: \"kubernetes.io/projected/7c604cd9-92a3-4327-bde2-7f8706d8ca99-kube-api-access-bmz9g\") on node \"crc\" DevicePath \"\"" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.216705 4695 generic.go:334] "Generic (PLEG): container finished" podID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" 
containerID="ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b" exitCode=0 Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.216750 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b46mq" event={"ID":"7c604cd9-92a3-4327-bde2-7f8706d8ca99","Type":"ContainerDied","Data":"ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b"} Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.216778 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b46mq" event={"ID":"7c604cd9-92a3-4327-bde2-7f8706d8ca99","Type":"ContainerDied","Data":"6add436dc9d80d9eaac19c9e62fdba7561f8740581736e91cbca1a8b3471831f"} Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.216795 4695 scope.go:117] "RemoveContainer" containerID="ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.216846 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b46mq" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.263057 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b46mq"] Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.264534 4695 scope.go:117] "RemoveContainer" containerID="8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.274281 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b46mq"] Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.288554 4695 scope.go:117] "RemoveContainer" containerID="a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.330176 4695 scope.go:117] "RemoveContainer" containerID="ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b" Nov 26 13:46:54 crc kubenswrapper[4695]: E1126 13:46:54.331014 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b\": container with ID starting with ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b not found: ID does not exist" containerID="ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.331072 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b"} err="failed to get container status \"ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b\": rpc error: code = NotFound desc = could not find container \"ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b\": container with ID starting with ad3824e3ff644b0bfdc977f4131342fa59abf98625182474a1dfc142cfbec26b not found: 
ID does not exist" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.331105 4695 scope.go:117] "RemoveContainer" containerID="8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7" Nov 26 13:46:54 crc kubenswrapper[4695]: E1126 13:46:54.331945 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7\": container with ID starting with 8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7 not found: ID does not exist" containerID="8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.331998 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7"} err="failed to get container status \"8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7\": rpc error: code = NotFound desc = could not find container \"8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7\": container with ID starting with 8aafa9be04562e17c52c6c720f01ec8afdd717fbb80569be8e2a1acba75d11e7 not found: ID does not exist" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.332035 4695 scope.go:117] "RemoveContainer" containerID="a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a" Nov 26 13:46:54 crc kubenswrapper[4695]: E1126 13:46:54.332392 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a\": container with ID starting with a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a not found: ID does not exist" containerID="a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a" Nov 26 13:46:54 crc kubenswrapper[4695]: I1126 13:46:54.332425 4695 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a"} err="failed to get container status \"a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a\": rpc error: code = NotFound desc = could not find container \"a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a\": container with ID starting with a99299788abe0324b8c6e4b2c73891633e0088ae0abea61c0050285ae2ee924a not found: ID does not exist" Nov 26 13:46:55 crc kubenswrapper[4695]: I1126 13:46:55.178029 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" path="/var/lib/kubelet/pods/7c604cd9-92a3-4327-bde2-7f8706d8ca99/volumes" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.300887 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l4rsr"] Nov 26 13:46:57 crc kubenswrapper[4695]: E1126 13:46:57.301590 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="registry-server" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.301602 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="registry-server" Nov 26 13:46:57 crc kubenswrapper[4695]: E1126 13:46:57.301620 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="extract-utilities" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.301626 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="extract-utilities" Nov 26 13:46:57 crc kubenswrapper[4695]: E1126 13:46:57.301634 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="extract-content" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.301640 4695 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="extract-content" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.301848 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c604cd9-92a3-4327-bde2-7f8706d8ca99" containerName="registry-server" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.303495 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.318066 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxv2\" (UniqueName: \"kubernetes.io/projected/44103a47-e83a-478a-bb41-3ec12bd44a50-kube-api-access-fvxv2\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.318206 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-utilities\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.318267 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-catalog-content\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.326418 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4rsr"] Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 
13:46:57.420016 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxv2\" (UniqueName: \"kubernetes.io/projected/44103a47-e83a-478a-bb41-3ec12bd44a50-kube-api-access-fvxv2\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.420427 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-utilities\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.420477 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-catalog-content\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.421018 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-catalog-content\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.421008 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-utilities\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.448918 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxv2\" (UniqueName: \"kubernetes.io/projected/44103a47-e83a-478a-bb41-3ec12bd44a50-kube-api-access-fvxv2\") pod \"community-operators-l4rsr\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:57 crc kubenswrapper[4695]: I1126 13:46:57.630071 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:46:58 crc kubenswrapper[4695]: I1126 13:46:58.141904 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l4rsr"] Nov 26 13:46:58 crc kubenswrapper[4695]: I1126 13:46:58.255230 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4rsr" event={"ID":"44103a47-e83a-478a-bb41-3ec12bd44a50","Type":"ContainerStarted","Data":"7a05169939ab34f7a0e065075bd4eb2c4caf87c1afd592057585458f43977169"} Nov 26 13:46:58 crc kubenswrapper[4695]: I1126 13:46:58.635766 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.103384 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-fd5l2"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.104879 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.109883 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.110025 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.124128 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fd5l2"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.165575 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59pw\" (UniqueName: \"kubernetes.io/projected/738dcd55-7a7f-4281-a6c2-a49de39a161e-kube-api-access-z59pw\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.165944 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.165992 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-scripts\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.166080 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-config-data\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.269659 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-config-data\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.269885 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59pw\" (UniqueName: \"kubernetes.io/projected/738dcd55-7a7f-4281-a6c2-a49de39a161e-kube-api-access-z59pw\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.270036 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.270176 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-scripts\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.282140 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.287569 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-scripts\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.289203 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-config-data\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.312975 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z59pw\" (UniqueName: \"kubernetes.io/projected/738dcd55-7a7f-4281-a6c2-a49de39a161e-kube-api-access-z59pw\") pod \"nova-cell0-cell-mapping-fd5l2\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.342750 4695 generic.go:334] "Generic (PLEG): container finished" podID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerID="ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02" exitCode=0 Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.342796 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4rsr" event={"ID":"44103a47-e83a-478a-bb41-3ec12bd44a50","Type":"ContainerDied","Data":"ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02"} Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.406401 4695 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.407909 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.411638 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.426970 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.458269 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.479665 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-config-data\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.479771 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.479858 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j85nx\" (UniqueName: \"kubernetes.io/projected/d92f023c-e700-4cd9-be33-985fda81f5d7-kube-api-access-j85nx\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.583008 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.584055 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j85nx\" (UniqueName: \"kubernetes.io/projected/d92f023c-e700-4cd9-be33-985fda81f5d7-kube-api-access-j85nx\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.584270 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-config-data\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.584532 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.586003 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.588150 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.596006 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.619633 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-config-data\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.639082 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j85nx\" (UniqueName: \"kubernetes.io/projected/d92f023c-e700-4cd9-be33-985fda81f5d7-kube-api-access-j85nx\") pod \"nova-scheduler-0\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.646816 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.672462 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.676416 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.678701 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.689780 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c7a964-94f7-4640-9f24-49dc0736047a-logs\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.689841 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.689869 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmr6k\" (UniqueName: \"kubernetes.io/projected/69c7a964-94f7-4640-9f24-49dc0736047a-kube-api-access-jmr6k\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.689902 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-config-data\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.696980 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.742184 4695 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.743365 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.744696 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.747378 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.759257 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.786170 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-m9rhq"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.789689 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794374 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-config-data\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794420 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794460 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c7a964-94f7-4640-9f24-49dc0736047a-logs\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794503 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe5550e-649d-454e-95e1-af534e2462cc-logs\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794521 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794576 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794602 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62kc\" (UniqueName: \"kubernetes.io/projected/efe5550e-649d-454e-95e1-af534e2462cc-kube-api-access-c62kc\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794622 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmr6k\" (UniqueName: \"kubernetes.io/projected/69c7a964-94f7-4640-9f24-49dc0736047a-kube-api-access-jmr6k\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794650 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794681 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-config-data\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.794714 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpmw\" (UniqueName: 
\"kubernetes.io/projected/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-kube-api-access-gwpmw\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.796045 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c7a964-94f7-4640-9f24-49dc0736047a-logs\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.798199 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-m9rhq"] Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.807576 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.809243 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-config-data\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.822085 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmr6k\" (UniqueName: \"kubernetes.io/projected/69c7a964-94f7-4640-9f24-49dc0736047a-kube-api-access-jmr6k\") pod \"nova-metadata-0\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896625 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896672 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s28ns\" (UniqueName: \"kubernetes.io/projected/3a64e394-eef4-4ade-bb3a-d41d2326b554-kube-api-access-s28ns\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896701 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe5550e-649d-454e-95e1-af534e2462cc-logs\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896723 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896780 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c62kc\" (UniqueName: \"kubernetes.io/projected/efe5550e-649d-454e-95e1-af534e2462cc-kube-api-access-c62kc\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896811 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896836 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896861 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896881 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwpmw\" (UniqueName: \"kubernetes.io/projected/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-kube-api-access-gwpmw\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896928 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-config\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896973 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-config-data\") pod \"nova-api-0\" (UID: 
\"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.896992 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.897021 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-svc\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.897123 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe5550e-649d-454e-95e1-af534e2462cc-logs\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.901736 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-config-data\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.902328 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.904233 4695 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.907167 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.926887 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwpmw\" (UniqueName: \"kubernetes.io/projected/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-kube-api-access-gwpmw\") pod \"nova-cell1-novncproxy-0\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.925177 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62kc\" (UniqueName: \"kubernetes.io/projected/efe5550e-649d-454e-95e1-af534e2462cc-kube-api-access-c62kc\") pod \"nova-api-0\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " pod="openstack/nova-api-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.963118 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.998627 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-svc\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.998674 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.998698 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s28ns\" (UniqueName: \"kubernetes.io/projected/3a64e394-eef4-4ade-bb3a-d41d2326b554-kube-api-access-s28ns\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.998751 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.998777 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " 
pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.998829 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-config\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:46:59 crc kubenswrapper[4695]: I1126 13:46:59.999642 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-config\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.000145 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-svc\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.000767 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.001938 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.002179 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.019045 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s28ns\" (UniqueName: \"kubernetes.io/projected/3a64e394-eef4-4ade-bb3a-d41d2326b554-kube-api-access-s28ns\") pod \"dnsmasq-dns-865f5d856f-m9rhq\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.045678 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.083744 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.136581 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.255768 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-fd5l2"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.349154 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.365949 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fd5l2" event={"ID":"738dcd55-7a7f-4281-a6c2-a49de39a161e","Type":"ContainerStarted","Data":"d904a5371a90b6d61eca663bf55276b55c12fcde09c857916eaa0ba7f3f5f90f"} Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.461692 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.602340 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wqfd2"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.604427 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.612362 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.612636 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.616448 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wqfd2"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.624461 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.721795 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-m9rhq"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.723867 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-config-data\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.723903 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.723956 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx7zt\" (UniqueName: 
\"kubernetes.io/projected/f29eebab-4cee-46a2-980a-49feebce3198-kube-api-access-bx7zt\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.723992 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-scripts\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: W1126 13:47:00.796445 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod671b4178_bcac_4fa4_8f6c_cb0c3163f3ca.slice/crio-6ae1021948bf64d2c6691890abceb772fcd0e3e5d1f498e25179c68f4d44e9fd WatchSource:0}: Error finding container 6ae1021948bf64d2c6691890abceb772fcd0e3e5d1f498e25179c68f4d44e9fd: Status 404 returned error can't find the container with id 6ae1021948bf64d2c6691890abceb772fcd0e3e5d1f498e25179c68f4d44e9fd Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.796905 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.825512 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx7zt\" (UniqueName: \"kubernetes.io/projected/f29eebab-4cee-46a2-980a-49feebce3198-kube-api-access-bx7zt\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.825852 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-scripts\") pod 
\"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.825969 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-config-data\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.825996 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.832094 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-scripts\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.832307 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.832320 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-config-data\") pod \"nova-cell1-conductor-db-sync-wqfd2\" 
(UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.847341 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx7zt\" (UniqueName: \"kubernetes.io/projected/f29eebab-4cee-46a2-980a-49feebce3198-kube-api-access-bx7zt\") pod \"nova-cell1-conductor-db-sync-wqfd2\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:00 crc kubenswrapper[4695]: I1126 13:47:00.924198 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.386716 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe5550e-649d-454e-95e1-af534e2462cc","Type":"ContainerStarted","Data":"8a2f9b715148e6d09399206c1ee751e2c7c92a9c703b32e7020d332cefe5662d"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.392153 4695 generic.go:334] "Generic (PLEG): container finished" podID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerID="96197d190e2df690bcd2ad76d5fec7b0a40ee61b74ba272e400d5d80dc19e4c0" exitCode=0 Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.392466 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" event={"ID":"3a64e394-eef4-4ade-bb3a-d41d2326b554","Type":"ContainerDied","Data":"96197d190e2df690bcd2ad76d5fec7b0a40ee61b74ba272e400d5d80dc19e4c0"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.392612 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" event={"ID":"3a64e394-eef4-4ade-bb3a-d41d2326b554","Type":"ContainerStarted","Data":"7ed4ed2675db07044f962e68e7676681c7675880712793ff8ea9e37ba1cdef1e"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.403730 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"69c7a964-94f7-4640-9f24-49dc0736047a","Type":"ContainerStarted","Data":"f58d3faf090f545d1aa607df026023710a1a4164061078dc54f1a27a2c58c0b9"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.406514 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d92f023c-e700-4cd9-be33-985fda81f5d7","Type":"ContainerStarted","Data":"103e7f837b0db64bfaf57a020cc8094f570c0660bdf54a4cf8a04b3a32b37af1"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.408691 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca","Type":"ContainerStarted","Data":"6ae1021948bf64d2c6691890abceb772fcd0e3e5d1f498e25179c68f4d44e9fd"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.429536 4695 generic.go:334] "Generic (PLEG): container finished" podID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerID="1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e" exitCode=0 Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.429581 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4rsr" event={"ID":"44103a47-e83a-478a-bb41-3ec12bd44a50","Type":"ContainerDied","Data":"1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.433043 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fd5l2" event={"ID":"738dcd55-7a7f-4281-a6c2-a49de39a161e","Type":"ContainerStarted","Data":"4ddbf728ae9b9e1a05878ff58148c9394a762fdd3265cf2de5e9c590dbad105c"} Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.476320 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wqfd2"] Nov 26 13:47:01 crc kubenswrapper[4695]: I1126 13:47:01.502083 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-fd5l2" podStartSLOduration=2.502063026 podStartE2EDuration="2.502063026s" podCreationTimestamp="2025-11-26 13:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:01.476295066 +0000 UTC m=+1405.112120148" watchObservedRunningTime="2025-11-26 13:47:01.502063026 +0000 UTC m=+1405.137888118" Nov 26 13:47:02 crc kubenswrapper[4695]: I1126 13:47:02.443294 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" event={"ID":"3a64e394-eef4-4ade-bb3a-d41d2326b554","Type":"ContainerStarted","Data":"61ebddc8f32b92169a7656f5f728a9cb45a1aa4bb8c8a3fd96d67b76368938cc"} Nov 26 13:47:02 crc kubenswrapper[4695]: I1126 13:47:02.445025 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:02 crc kubenswrapper[4695]: I1126 13:47:02.446851 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" event={"ID":"f29eebab-4cee-46a2-980a-49feebce3198","Type":"ContainerStarted","Data":"e8634326a70c4fc05cc4c181982d6b854ae2092519e8dc60bbd2057a8a2648fb"} Nov 26 13:47:02 crc kubenswrapper[4695]: I1126 13:47:02.446875 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" event={"ID":"f29eebab-4cee-46a2-980a-49feebce3198","Type":"ContainerStarted","Data":"0f3dec66964d9d6cea9711059e0a438973a1d132b455fc1b0889b3d2daba16d2"} Nov 26 13:47:02 crc kubenswrapper[4695]: I1126 13:47:02.463502 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" podStartSLOduration=3.463485848 podStartE2EDuration="3.463485848s" podCreationTimestamp="2025-11-26 13:46:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-26 13:47:02.458972705 +0000 UTC m=+1406.094797797" watchObservedRunningTime="2025-11-26 13:47:02.463485848 +0000 UTC m=+1406.099310930" Nov 26 13:47:03 crc kubenswrapper[4695]: I1126 13:47:03.317433 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" podStartSLOduration=3.317413664 podStartE2EDuration="3.317413664s" podCreationTimestamp="2025-11-26 13:47:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:02.481994327 +0000 UTC m=+1406.117819409" watchObservedRunningTime="2025-11-26 13:47:03.317413664 +0000 UTC m=+1406.953238736" Nov 26 13:47:03 crc kubenswrapper[4695]: I1126 13:47:03.318272 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 13:47:03 crc kubenswrapper[4695]: I1126 13:47:03.326954 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.467111 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"69c7a964-94f7-4640-9f24-49dc0736047a","Type":"ContainerStarted","Data":"7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c"} Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.467739 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"69c7a964-94f7-4640-9f24-49dc0736047a","Type":"ContainerStarted","Data":"dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50"} Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.467200 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-metadata" containerID="cri-o://7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c" 
gracePeriod=30 Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.467157 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-log" containerID="cri-o://dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50" gracePeriod=30 Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.469574 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d92f023c-e700-4cd9-be33-985fda81f5d7","Type":"ContainerStarted","Data":"f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77"} Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.473732 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca","Type":"ContainerStarted","Data":"51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4"} Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.474092 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4" gracePeriod=30 Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.480891 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4rsr" event={"ID":"44103a47-e83a-478a-bb41-3ec12bd44a50","Type":"ContainerStarted","Data":"c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50"} Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.484241 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe5550e-649d-454e-95e1-af534e2462cc","Type":"ContainerStarted","Data":"38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880"} Nov 26 13:47:04 crc 
kubenswrapper[4695]: I1126 13:47:04.484720 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe5550e-649d-454e-95e1-af534e2462cc","Type":"ContainerStarted","Data":"ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d"} Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.496620 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.230520376 podStartE2EDuration="5.496599044s" podCreationTimestamp="2025-11-26 13:46:59 +0000 UTC" firstStartedPulling="2025-11-26 13:47:00.468680843 +0000 UTC m=+1404.104505925" lastFinishedPulling="2025-11-26 13:47:03.734759511 +0000 UTC m=+1407.370584593" observedRunningTime="2025-11-26 13:47:04.487093351 +0000 UTC m=+1408.122918433" watchObservedRunningTime="2025-11-26 13:47:04.496599044 +0000 UTC m=+1408.132424126" Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.512811 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l4rsr" podStartSLOduration=3.12397244 podStartE2EDuration="7.512786119s" podCreationTimestamp="2025-11-26 13:46:57 +0000 UTC" firstStartedPulling="2025-11-26 13:46:59.34618857 +0000 UTC m=+1402.982013652" lastFinishedPulling="2025-11-26 13:47:03.735002249 +0000 UTC m=+1407.370827331" observedRunningTime="2025-11-26 13:47:04.510139245 +0000 UTC m=+1408.145964337" watchObservedRunningTime="2025-11-26 13:47:04.512786119 +0000 UTC m=+1408.148611201" Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.536024 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.488446803 podStartE2EDuration="5.53600999s" podCreationTimestamp="2025-11-26 13:46:59 +0000 UTC" firstStartedPulling="2025-11-26 13:47:00.673467807 +0000 UTC m=+1404.309292879" lastFinishedPulling="2025-11-26 13:47:03.721030984 +0000 UTC m=+1407.356856066" observedRunningTime="2025-11-26 
13:47:04.534968546 +0000 UTC m=+1408.170793628" watchObservedRunningTime="2025-11-26 13:47:04.53600999 +0000 UTC m=+1408.171835072" Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.587798 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.679148318 podStartE2EDuration="5.587777928s" podCreationTimestamp="2025-11-26 13:46:59 +0000 UTC" firstStartedPulling="2025-11-26 13:47:00.813805168 +0000 UTC m=+1404.449630250" lastFinishedPulling="2025-11-26 13:47:03.722434778 +0000 UTC m=+1407.358259860" observedRunningTime="2025-11-26 13:47:04.563281638 +0000 UTC m=+1408.199106720" watchObservedRunningTime="2025-11-26 13:47:04.587777928 +0000 UTC m=+1408.223603010" Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.744625 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.964281 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 13:47:04 crc kubenswrapper[4695]: I1126 13:47:04.964322 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 13:47:05 crc kubenswrapper[4695]: I1126 13:47:05.084996 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:47:05 crc kubenswrapper[4695]: I1126 13:47:05.427765 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 26 13:47:05 crc kubenswrapper[4695]: I1126 13:47:05.477552 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.153098198 podStartE2EDuration="6.477528626s" podCreationTimestamp="2025-11-26 13:46:59 +0000 UTC" firstStartedPulling="2025-11-26 13:47:00.39642305 +0000 UTC m=+1404.032248132" lastFinishedPulling="2025-11-26 
13:47:03.720853468 +0000 UTC m=+1407.356678560" observedRunningTime="2025-11-26 13:47:04.586753156 +0000 UTC m=+1408.222578239" watchObservedRunningTime="2025-11-26 13:47:05.477528626 +0000 UTC m=+1409.113353718" Nov 26 13:47:05 crc kubenswrapper[4695]: I1126 13:47:05.504614 4695 generic.go:334] "Generic (PLEG): container finished" podID="69c7a964-94f7-4640-9f24-49dc0736047a" containerID="dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50" exitCode=143 Nov 26 13:47:05 crc kubenswrapper[4695]: I1126 13:47:05.505640 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"69c7a964-94f7-4640-9f24-49dc0736047a","Type":"ContainerDied","Data":"dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50"} Nov 26 13:47:07 crc kubenswrapper[4695]: I1126 13:47:07.630542 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:47:07 crc kubenswrapper[4695]: I1126 13:47:07.631498 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:47:07 crc kubenswrapper[4695]: I1126 13:47:07.680180 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:47:08 crc kubenswrapper[4695]: I1126 13:47:08.529656 4695 generic.go:334] "Generic (PLEG): container finished" podID="738dcd55-7a7f-4281-a6c2-a49de39a161e" containerID="4ddbf728ae9b9e1a05878ff58148c9394a762fdd3265cf2de5e9c590dbad105c" exitCode=0 Nov 26 13:47:08 crc kubenswrapper[4695]: I1126 13:47:08.529744 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fd5l2" event={"ID":"738dcd55-7a7f-4281-a6c2-a49de39a161e","Type":"ContainerDied","Data":"4ddbf728ae9b9e1a05878ff58148c9394a762fdd3265cf2de5e9c590dbad105c"} Nov 26 13:47:08 crc kubenswrapper[4695]: I1126 13:47:08.983640 4695 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:47:08 crc kubenswrapper[4695]: I1126 13:47:08.983871 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="814d5f70-9acd-4a44-9ba1-5f6b95933a80" containerName="kube-state-metrics" containerID="cri-o://45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726" gracePeriod=30 Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.474094 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.540622 4695 generic.go:334] "Generic (PLEG): container finished" podID="814d5f70-9acd-4a44-9ba1-5f6b95933a80" containerID="45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726" exitCode=2 Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.540836 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.541528 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"814d5f70-9acd-4a44-9ba1-5f6b95933a80","Type":"ContainerDied","Data":"45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726"} Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.541561 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"814d5f70-9acd-4a44-9ba1-5f6b95933a80","Type":"ContainerDied","Data":"bbef412344895c2cf16fff878ac007c6ae051794512a07212a7d497d1a38b8aa"} Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.541582 4695 scope.go:117] "RemoveContainer" containerID="45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.601582 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-xc7db\" (UniqueName: \"kubernetes.io/projected/814d5f70-9acd-4a44-9ba1-5f6b95933a80-kube-api-access-xc7db\") pod \"814d5f70-9acd-4a44-9ba1-5f6b95933a80\" (UID: \"814d5f70-9acd-4a44-9ba1-5f6b95933a80\") " Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.613441 4695 scope.go:117] "RemoveContainer" containerID="45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726" Nov 26 13:47:09 crc kubenswrapper[4695]: E1126 13:47:09.613957 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726\": container with ID starting with 45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726 not found: ID does not exist" containerID="45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.613993 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726"} err="failed to get container status \"45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726\": rpc error: code = NotFound desc = could not find container \"45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726\": container with ID starting with 45dc3c5dc83b5fa1822fa22787b5c228eb0dff6d8d05a7c06377b7aaf2427726 not found: ID does not exist" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.620789 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814d5f70-9acd-4a44-9ba1-5f6b95933a80-kube-api-access-xc7db" (OuterVolumeSpecName: "kube-api-access-xc7db") pod "814d5f70-9acd-4a44-9ba1-5f6b95933a80" (UID: "814d5f70-9acd-4a44-9ba1-5f6b95933a80"). InnerVolumeSpecName "kube-api-access-xc7db". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.643925 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.696574 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4rsr"] Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.709742 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc7db\" (UniqueName: \"kubernetes.io/projected/814d5f70-9acd-4a44-9ba1-5f6b95933a80-kube-api-access-xc7db\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.744276 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.773279 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.881236 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.898400 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.913022 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.913147 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z59pw\" (UniqueName: \"kubernetes.io/projected/738dcd55-7a7f-4281-a6c2-a49de39a161e-kube-api-access-z59pw\") pod \"738dcd55-7a7f-4281-a6c2-a49de39a161e\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.913187 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-scripts\") pod \"738dcd55-7a7f-4281-a6c2-a49de39a161e\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.913284 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-combined-ca-bundle\") pod \"738dcd55-7a7f-4281-a6c2-a49de39a161e\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.913308 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-config-data\") pod \"738dcd55-7a7f-4281-a6c2-a49de39a161e\" (UID: \"738dcd55-7a7f-4281-a6c2-a49de39a161e\") " Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.917535 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-scripts" 
(OuterVolumeSpecName: "scripts") pod "738dcd55-7a7f-4281-a6c2-a49de39a161e" (UID: "738dcd55-7a7f-4281-a6c2-a49de39a161e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.917917 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/738dcd55-7a7f-4281-a6c2-a49de39a161e-kube-api-access-z59pw" (OuterVolumeSpecName: "kube-api-access-z59pw") pod "738dcd55-7a7f-4281-a6c2-a49de39a161e" (UID: "738dcd55-7a7f-4281-a6c2-a49de39a161e"). InnerVolumeSpecName "kube-api-access-z59pw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.923165 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:47:09 crc kubenswrapper[4695]: E1126 13:47:09.923924 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="738dcd55-7a7f-4281-a6c2-a49de39a161e" containerName="nova-manage" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.923942 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="738dcd55-7a7f-4281-a6c2-a49de39a161e" containerName="nova-manage" Nov 26 13:47:09 crc kubenswrapper[4695]: E1126 13:47:09.923965 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814d5f70-9acd-4a44-9ba1-5f6b95933a80" containerName="kube-state-metrics" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.923976 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="814d5f70-9acd-4a44-9ba1-5f6b95933a80" containerName="kube-state-metrics" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.924175 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="738dcd55-7a7f-4281-a6c2-a49de39a161e" containerName="nova-manage" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.924191 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="814d5f70-9acd-4a44-9ba1-5f6b95933a80" 
containerName="kube-state-metrics" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.924796 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.938917 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.939009 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.945569 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-config-data" (OuterVolumeSpecName: "config-data") pod "738dcd55-7a7f-4281-a6c2-a49de39a161e" (UID: "738dcd55-7a7f-4281-a6c2-a49de39a161e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.950779 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:47:09 crc kubenswrapper[4695]: I1126 13:47:09.968866 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "738dcd55-7a7f-4281-a6c2-a49de39a161e" (UID: "738dcd55-7a7f-4281-a6c2-a49de39a161e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.015366 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gppqp\" (UniqueName: \"kubernetes.io/projected/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-api-access-gppqp\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.015437 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.015494 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.015544 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.015588 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:10 crc 
kubenswrapper[4695]: I1126 13:47:10.015599 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.015608 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z59pw\" (UniqueName: \"kubernetes.io/projected/738dcd55-7a7f-4281-a6c2-a49de39a161e-kube-api-access-z59pw\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.015619 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/738dcd55-7a7f-4281-a6c2-a49de39a161e-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.046804 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.046874 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.116654 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.116752 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.116851 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gppqp\" (UniqueName: \"kubernetes.io/projected/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-api-access-gppqp\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.116902 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.120772 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.121649 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.126159 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.138900 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.142149 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gppqp\" (UniqueName: \"kubernetes.io/projected/8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f-kube-api-access-gppqp\") pod \"kube-state-metrics-0\" (UID: \"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f\") " pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.208093 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-t9qc9"] Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.208376 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" podUID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerName="dnsmasq-dns" containerID="cri-o://309eb20150d86724473e65a0064a99e3a991edf6b38d888239c7d5a8476d0657" gracePeriod=10 Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.251998 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.578721 4695 generic.go:334] "Generic (PLEG): container finished" podID="f29eebab-4cee-46a2-980a-49feebce3198" containerID="e8634326a70c4fc05cc4c181982d6b854ae2092519e8dc60bbd2057a8a2648fb" exitCode=0 Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.579092 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" event={"ID":"f29eebab-4cee-46a2-980a-49feebce3198","Type":"ContainerDied","Data":"e8634326a70c4fc05cc4c181982d6b854ae2092519e8dc60bbd2057a8a2648fb"} Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.604687 4695 generic.go:334] "Generic (PLEG): container finished" podID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerID="309eb20150d86724473e65a0064a99e3a991edf6b38d888239c7d5a8476d0657" exitCode=0 Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.604729 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" event={"ID":"335e87e5-67d2-44c6-9f2d-ac0424b243c2","Type":"ContainerDied","Data":"309eb20150d86724473e65a0064a99e3a991edf6b38d888239c7d5a8476d0657"} Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.619575 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-fd5l2" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.619891 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-fd5l2" event={"ID":"738dcd55-7a7f-4281-a6c2-a49de39a161e","Type":"ContainerDied","Data":"d904a5371a90b6d61eca663bf55276b55c12fcde09c857916eaa0ba7f3f5f90f"} Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.620405 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d904a5371a90b6d61eca663bf55276b55c12fcde09c857916eaa0ba7f3f5f90f" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.725094 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.776257 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.776731 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-log" containerID="cri-o://ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d" gracePeriod=30 Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.777934 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-api" containerID="cri-o://38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880" gracePeriod=30 Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.793170 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.793176 4695 prober.go:107] "Probe failed" 
probeType="Startup" pod="openstack/nova-api-0" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.189:8774/\": EOF" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.795619 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.836332 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.903543 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.941367 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-nb\") pod \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.941470 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g29zv\" (UniqueName: \"kubernetes.io/projected/335e87e5-67d2-44c6-9f2d-ac0424b243c2-kube-api-access-g29zv\") pod \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.941668 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-swift-storage-0\") pod \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.941730 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-sb\") pod \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.941761 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-config\") pod \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.941795 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-svc\") pod \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\" (UID: \"335e87e5-67d2-44c6-9f2d-ac0424b243c2\") " Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.946829 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/335e87e5-67d2-44c6-9f2d-ac0424b243c2-kube-api-access-g29zv" (OuterVolumeSpecName: "kube-api-access-g29zv") pod "335e87e5-67d2-44c6-9f2d-ac0424b243c2" (UID: "335e87e5-67d2-44c6-9f2d-ac0424b243c2"). InnerVolumeSpecName "kube-api-access-g29zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:10 crc kubenswrapper[4695]: I1126 13:47:10.998434 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "335e87e5-67d2-44c6-9f2d-ac0424b243c2" (UID: "335e87e5-67d2-44c6-9f2d-ac0424b243c2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:10.999837 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "335e87e5-67d2-44c6-9f2d-ac0424b243c2" (UID: "335e87e5-67d2-44c6-9f2d-ac0424b243c2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.000669 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "335e87e5-67d2-44c6-9f2d-ac0424b243c2" (UID: "335e87e5-67d2-44c6-9f2d-ac0424b243c2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.006204 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "335e87e5-67d2-44c6-9f2d-ac0424b243c2" (UID: "335e87e5-67d2-44c6-9f2d-ac0424b243c2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.010971 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-config" (OuterVolumeSpecName: "config") pod "335e87e5-67d2-44c6-9f2d-ac0424b243c2" (UID: "335e87e5-67d2-44c6-9f2d-ac0424b243c2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.044130 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.044171 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.044184 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.044192 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.044203 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/335e87e5-67d2-44c6-9f2d-ac0424b243c2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.044213 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g29zv\" (UniqueName: \"kubernetes.io/projected/335e87e5-67d2-44c6-9f2d-ac0424b243c2-kube-api-access-g29zv\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.174167 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="814d5f70-9acd-4a44-9ba1-5f6b95933a80" path="/var/lib/kubelet/pods/814d5f70-9acd-4a44-9ba1-5f6b95933a80/volumes" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.357302 4695 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.358092 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-central-agent" containerID="cri-o://f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221" gracePeriod=30 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.358136 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="sg-core" containerID="cri-o://1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281" gracePeriod=30 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.358204 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-notification-agent" containerID="cri-o://28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec" gracePeriod=30 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.358235 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="proxy-httpd" containerID="cri-o://acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f" gracePeriod=30 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.657550 4695 generic.go:334] "Generic (PLEG): container finished" podID="efe5550e-649d-454e-95e1-af534e2462cc" containerID="ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d" exitCode=143 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.657656 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"efe5550e-649d-454e-95e1-af534e2462cc","Type":"ContainerDied","Data":"ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d"} Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.673699 4695 generic.go:334] "Generic (PLEG): container finished" podID="416de75f-33b6-4864-9d33-799fb2413609" containerID="acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f" exitCode=0 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.673744 4695 generic.go:334] "Generic (PLEG): container finished" podID="416de75f-33b6-4864-9d33-799fb2413609" containerID="1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281" exitCode=2 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.673764 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerDied","Data":"acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f"} Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.673834 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerDied","Data":"1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281"} Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.686234 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" event={"ID":"335e87e5-67d2-44c6-9f2d-ac0424b243c2","Type":"ContainerDied","Data":"756cea872baccfd50870df17816061998ae12225874ba772a09f2903aa32535f"} Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.686314 4695 scope.go:117] "RemoveContainer" containerID="309eb20150d86724473e65a0064a99e3a991edf6b38d888239c7d5a8476d0657" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.686262 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-t9qc9" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.688777 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l4rsr" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="registry-server" containerID="cri-o://c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50" gracePeriod=2 Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.688775 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f","Type":"ContainerStarted","Data":"0df489c732ea3fee5f29956854814fa3260a1467cdee1545b94e54b2bf68dfe8"} Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.688919 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f","Type":"ContainerStarted","Data":"58af4933d27b6a123c303510ab6f71f289bfafe6960a26de8b12bcfb57c64ce0"} Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.689513 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.721355 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-t9qc9"] Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.733553 4695 scope.go:117] "RemoveContainer" containerID="2406831eb658b9a8acbb2140c8669084e9482778c78d63022b1a10130d512fbf" Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.735020 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-t9qc9"] Nov 26 13:47:11 crc kubenswrapper[4695]: I1126 13:47:11.746371 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.421731811 podStartE2EDuration="2.746332763s" 
podCreationTimestamp="2025-11-26 13:47:09 +0000 UTC" firstStartedPulling="2025-11-26 13:47:10.904242054 +0000 UTC m=+1414.540067136" lastFinishedPulling="2025-11-26 13:47:11.228843006 +0000 UTC m=+1414.864668088" observedRunningTime="2025-11-26 13:47:11.741647684 +0000 UTC m=+1415.377472766" watchObservedRunningTime="2025-11-26 13:47:11.746332763 +0000 UTC m=+1415.382157835" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.058683 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.170410 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-combined-ca-bundle\") pod \"f29eebab-4cee-46a2-980a-49feebce3198\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.170541 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-scripts\") pod \"f29eebab-4cee-46a2-980a-49feebce3198\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.170635 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx7zt\" (UniqueName: \"kubernetes.io/projected/f29eebab-4cee-46a2-980a-49feebce3198-kube-api-access-bx7zt\") pod \"f29eebab-4cee-46a2-980a-49feebce3198\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.170654 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-config-data\") pod \"f29eebab-4cee-46a2-980a-49feebce3198\" (UID: \"f29eebab-4cee-46a2-980a-49feebce3198\") " Nov 26 13:47:12 
crc kubenswrapper[4695]: I1126 13:47:12.175171 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-scripts" (OuterVolumeSpecName: "scripts") pod "f29eebab-4cee-46a2-980a-49feebce3198" (UID: "f29eebab-4cee-46a2-980a-49feebce3198"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.193534 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f29eebab-4cee-46a2-980a-49feebce3198-kube-api-access-bx7zt" (OuterVolumeSpecName: "kube-api-access-bx7zt") pod "f29eebab-4cee-46a2-980a-49feebce3198" (UID: "f29eebab-4cee-46a2-980a-49feebce3198"). InnerVolumeSpecName "kube-api-access-bx7zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.201595 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-config-data" (OuterVolumeSpecName: "config-data") pod "f29eebab-4cee-46a2-980a-49feebce3198" (UID: "f29eebab-4cee-46a2-980a-49feebce3198"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.221706 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f29eebab-4cee-46a2-980a-49feebce3198" (UID: "f29eebab-4cee-46a2-980a-49feebce3198"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.273783 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.274825 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx7zt\" (UniqueName: \"kubernetes.io/projected/f29eebab-4cee-46a2-980a-49feebce3198-kube-api-access-bx7zt\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.274935 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.275004 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f29eebab-4cee-46a2-980a-49feebce3198-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.310416 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.479464 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-catalog-content\") pod \"44103a47-e83a-478a-bb41-3ec12bd44a50\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.479757 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvxv2\" (UniqueName: \"kubernetes.io/projected/44103a47-e83a-478a-bb41-3ec12bd44a50-kube-api-access-fvxv2\") pod \"44103a47-e83a-478a-bb41-3ec12bd44a50\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.479914 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-utilities\") pod \"44103a47-e83a-478a-bb41-3ec12bd44a50\" (UID: \"44103a47-e83a-478a-bb41-3ec12bd44a50\") " Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.480602 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-utilities" (OuterVolumeSpecName: "utilities") pod "44103a47-e83a-478a-bb41-3ec12bd44a50" (UID: "44103a47-e83a-478a-bb41-3ec12bd44a50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.485856 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44103a47-e83a-478a-bb41-3ec12bd44a50-kube-api-access-fvxv2" (OuterVolumeSpecName: "kube-api-access-fvxv2") pod "44103a47-e83a-478a-bb41-3ec12bd44a50" (UID: "44103a47-e83a-478a-bb41-3ec12bd44a50"). InnerVolumeSpecName "kube-api-access-fvxv2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.534929 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44103a47-e83a-478a-bb41-3ec12bd44a50" (UID: "44103a47-e83a-478a-bb41-3ec12bd44a50"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.581675 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvxv2\" (UniqueName: \"kubernetes.io/projected/44103a47-e83a-478a-bb41-3ec12bd44a50-kube-api-access-fvxv2\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.581724 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.581738 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44103a47-e83a-478a-bb41-3ec12bd44a50-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686014 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.686476 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerName="dnsmasq-dns" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686495 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerName="dnsmasq-dns" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.686514 4695 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="extract-utilities" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686522 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="extract-utilities" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.686552 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="extract-content" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686560 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="extract-content" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.686575 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerName="init" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686583 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerName="init" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.686600 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f29eebab-4cee-46a2-980a-49feebce3198" containerName="nova-cell1-conductor-db-sync" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686609 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f29eebab-4cee-46a2-980a-49feebce3198" containerName="nova-cell1-conductor-db-sync" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.686618 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="registry-server" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686625 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="registry-server" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686840 4695 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="f29eebab-4cee-46a2-980a-49feebce3198" containerName="nova-cell1-conductor-db-sync" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686859 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerName="registry-server" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.686891 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" containerName="dnsmasq-dns" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.687656 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.699549 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.712084 4695 generic.go:334] "Generic (PLEG): container finished" podID="44103a47-e83a-478a-bb41-3ec12bd44a50" containerID="c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50" exitCode=0 Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.712440 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-l4rsr" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.712826 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4rsr" event={"ID":"44103a47-e83a-478a-bb41-3ec12bd44a50","Type":"ContainerDied","Data":"c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50"} Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.712896 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l4rsr" event={"ID":"44103a47-e83a-478a-bb41-3ec12bd44a50","Type":"ContainerDied","Data":"7a05169939ab34f7a0e065075bd4eb2c4caf87c1afd592057585458f43977169"} Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.712917 4695 scope.go:117] "RemoveContainer" containerID="c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.715499 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.714801 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-wqfd2" event={"ID":"f29eebab-4cee-46a2-980a-49feebce3198","Type":"ContainerDied","Data":"0f3dec66964d9d6cea9711059e0a438973a1d132b455fc1b0889b3d2daba16d2"} Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.716141 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0f3dec66964d9d6cea9711059e0a438973a1d132b455fc1b0889b3d2daba16d2" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.718579 4695 generic.go:334] "Generic (PLEG): container finished" podID="416de75f-33b6-4864-9d33-799fb2413609" containerID="f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221" exitCode=0 Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.718671 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerDied","Data":"f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221"} Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.719080 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d92f023c-e700-4cd9-be33-985fda81f5d7" containerName="nova-scheduler-scheduler" containerID="cri-o://f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77" gracePeriod=30 Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.759602 4695 scope.go:117] "RemoveContainer" containerID="1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.768423 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l4rsr"] Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.777020 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-l4rsr"] Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.793253 4695 scope.go:117] "RemoveContainer" containerID="ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.815607 4695 scope.go:117] "RemoveContainer" containerID="c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.815910 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50\": container with ID starting with c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50 not found: ID does not exist" containerID="c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.815944 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50"} err="failed to get container status \"c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50\": rpc error: code = NotFound desc = could not find container \"c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50\": container with ID starting with c061b4523928fd976db93dba7540ededeccf80c88ebc050c25a93cabdc0cfe50 not found: ID does not exist" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.815968 4695 scope.go:117] "RemoveContainer" containerID="1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.816260 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e\": container with ID starting with 
1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e not found: ID does not exist" containerID="1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.816300 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e"} err="failed to get container status \"1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e\": rpc error: code = NotFound desc = could not find container \"1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e\": container with ID starting with 1f3abc845a24a140f472f02055071c6eaf0dcf32261435da300d6d693320553e not found: ID does not exist" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.816315 4695 scope.go:117] "RemoveContainer" containerID="ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02" Nov 26 13:47:12 crc kubenswrapper[4695]: E1126 13:47:12.816574 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02\": container with ID starting with ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02 not found: ID does not exist" containerID="ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.816612 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02"} err="failed to get container status \"ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02\": rpc error: code = NotFound desc = could not find container \"ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02\": container with ID starting with ebbe26bc06d229eef62193178d58e5195165c21fa4964637bc9a548f67122d02 not found: ID does not 
exist" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.886109 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptw8t\" (UniqueName: \"kubernetes.io/projected/f8537527-a9d6-41b1-b7ad-a281ee216c45-kube-api-access-ptw8t\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.886175 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8537527-a9d6-41b1-b7ad-a281ee216c45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.886486 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8537527-a9d6-41b1-b7ad-a281ee216c45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.988315 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8537527-a9d6-41b1-b7ad-a281ee216c45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.988462 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8537527-a9d6-41b1-b7ad-a281ee216c45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: 
I1126 13:47:12.988527 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptw8t\" (UniqueName: \"kubernetes.io/projected/f8537527-a9d6-41b1-b7ad-a281ee216c45-kube-api-access-ptw8t\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.992109 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8537527-a9d6-41b1-b7ad-a281ee216c45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:12 crc kubenswrapper[4695]: I1126 13:47:12.993885 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8537527-a9d6-41b1-b7ad-a281ee216c45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.004614 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptw8t\" (UniqueName: \"kubernetes.io/projected/f8537527-a9d6-41b1-b7ad-a281ee216c45-kube-api-access-ptw8t\") pod \"nova-cell1-conductor-0\" (UID: \"f8537527-a9d6-41b1-b7ad-a281ee216c45\") " pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.006729 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.184052 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="335e87e5-67d2-44c6-9f2d-ac0424b243c2" path="/var/lib/kubelet/pods/335e87e5-67d2-44c6-9f2d-ac0424b243c2/volumes" Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.185167 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44103a47-e83a-478a-bb41-3ec12bd44a50" path="/var/lib/kubelet/pods/44103a47-e83a-478a-bb41-3ec12bd44a50/volumes" Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.473710 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.732462 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f8537527-a9d6-41b1-b7ad-a281ee216c45","Type":"ContainerStarted","Data":"6e27175bb954a50d66aba4b525cc392ab258212e58587c7449959d1be5508da0"} Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.732502 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"f8537527-a9d6-41b1-b7ad-a281ee216c45","Type":"ContainerStarted","Data":"3167b933829cbce2f8a794417ba08dd52815e2666b59bfe512ae27f9bb5c9032"} Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.732552 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:13 crc kubenswrapper[4695]: I1126 13:47:13.747901 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=1.747882223 podStartE2EDuration="1.747882223s" podCreationTimestamp="2025-11-26 13:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:13.747298745 +0000 UTC m=+1417.383123827" 
watchObservedRunningTime="2025-11-26 13:47:13.747882223 +0000 UTC m=+1417.383707305" Nov 26 13:47:14 crc kubenswrapper[4695]: E1126 13:47:14.746669 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 13:47:14 crc kubenswrapper[4695]: E1126 13:47:14.748787 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 13:47:14 crc kubenswrapper[4695]: E1126 13:47:14.750849 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 13:47:14 crc kubenswrapper[4695]: E1126 13:47:14.750893 4695 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d92f023c-e700-4cd9-be33-985fda81f5d7" containerName="nova-scheduler-scheduler" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.365936 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.564955 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-config-data\") pod \"416de75f-33b6-4864-9d33-799fb2413609\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.565059 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5j69\" (UniqueName: \"kubernetes.io/projected/416de75f-33b6-4864-9d33-799fb2413609-kube-api-access-r5j69\") pod \"416de75f-33b6-4864-9d33-799fb2413609\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.565145 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-run-httpd\") pod \"416de75f-33b6-4864-9d33-799fb2413609\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.565217 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-combined-ca-bundle\") pod \"416de75f-33b6-4864-9d33-799fb2413609\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.565270 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-sg-core-conf-yaml\") pod \"416de75f-33b6-4864-9d33-799fb2413609\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.565402 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-scripts\") pod \"416de75f-33b6-4864-9d33-799fb2413609\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.565475 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-log-httpd\") pod \"416de75f-33b6-4864-9d33-799fb2413609\" (UID: \"416de75f-33b6-4864-9d33-799fb2413609\") " Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.565695 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "416de75f-33b6-4864-9d33-799fb2413609" (UID: "416de75f-33b6-4864-9d33-799fb2413609"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.566120 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "416de75f-33b6-4864-9d33-799fb2413609" (UID: "416de75f-33b6-4864-9d33-799fb2413609"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.566219 4695 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.574019 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-scripts" (OuterVolumeSpecName: "scripts") pod "416de75f-33b6-4864-9d33-799fb2413609" (UID: "416de75f-33b6-4864-9d33-799fb2413609"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.578868 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416de75f-33b6-4864-9d33-799fb2413609-kube-api-access-r5j69" (OuterVolumeSpecName: "kube-api-access-r5j69") pod "416de75f-33b6-4864-9d33-799fb2413609" (UID: "416de75f-33b6-4864-9d33-799fb2413609"). InnerVolumeSpecName "kube-api-access-r5j69". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.603329 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "416de75f-33b6-4864-9d33-799fb2413609" (UID: "416de75f-33b6-4864-9d33-799fb2413609"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.662977 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "416de75f-33b6-4864-9d33-799fb2413609" (UID: "416de75f-33b6-4864-9d33-799fb2413609"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.667403 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.667431 4695 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/416de75f-33b6-4864-9d33-799fb2413609-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.667442 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5j69\" (UniqueName: \"kubernetes.io/projected/416de75f-33b6-4864-9d33-799fb2413609-kube-api-access-r5j69\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.667453 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.667461 4695 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.682392 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-config-data" (OuterVolumeSpecName: "config-data") pod "416de75f-33b6-4864-9d33-799fb2413609" (UID: "416de75f-33b6-4864-9d33-799fb2413609"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.762545 4695 generic.go:334] "Generic (PLEG): container finished" podID="d92f023c-e700-4cd9-be33-985fda81f5d7" containerID="f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77" exitCode=0 Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.762648 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d92f023c-e700-4cd9-be33-985fda81f5d7","Type":"ContainerDied","Data":"f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77"} Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.766220 4695 generic.go:334] "Generic (PLEG): container finished" podID="416de75f-33b6-4864-9d33-799fb2413609" containerID="28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec" exitCode=0 Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.766257 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerDied","Data":"28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec"} Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.766282 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"416de75f-33b6-4864-9d33-799fb2413609","Type":"ContainerDied","Data":"ddbb54b50158c9204c8638a53398d4210d6e0901a5b941aae099481852086948"} Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.766304 4695 scope.go:117] "RemoveContainer" containerID="acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.766483 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.768824 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416de75f-33b6-4864-9d33-799fb2413609-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.801813 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.808226 4695 scope.go:117] "RemoveContainer" containerID="1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.824731 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.836400 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.836904 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="proxy-httpd" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.836936 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="proxy-httpd" Nov 26 13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.836955 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-notification-agent" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.836964 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-notification-agent" Nov 26 13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.836992 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-central-agent" Nov 26 13:47:15 crc 
kubenswrapper[4695]: I1126 13:47:15.837001 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-central-agent" Nov 26 13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.837015 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="sg-core" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.837021 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="sg-core" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.837241 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-central-agent" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.837261 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="sg-core" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.837273 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="proxy-httpd" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.837289 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="416de75f-33b6-4864-9d33-799fb2413609" containerName="ceilometer-notification-agent" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.839150 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.844915 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.845194 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.845828 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.851810 4695 scope.go:117] "RemoveContainer" containerID="28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.854742 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.872225 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-log-httpd\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.872281 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.872312 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " 
pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.874430 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.874504 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-run-httpd\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.874520 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6mzk\" (UniqueName: \"kubernetes.io/projected/04061804-12eb-4a78-b468-3008a2054b50-kube-api-access-f6mzk\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.874622 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-scripts\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.874662 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-config-data\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.887759 4695 scope.go:117] "RemoveContainer" 
containerID="f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.912287 4695 scope.go:117] "RemoveContainer" containerID="acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f" Nov 26 13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.913223 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f\": container with ID starting with acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f not found: ID does not exist" containerID="acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.913279 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f"} err="failed to get container status \"acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f\": rpc error: code = NotFound desc = could not find container \"acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f\": container with ID starting with acff1c66eb4aa432c8a9a04d175f2c3beec45d9d2abe99de568b52a96ed3f85f not found: ID does not exist" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.913306 4695 scope.go:117] "RemoveContainer" containerID="1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281" Nov 26 13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.913669 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281\": container with ID starting with 1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281 not found: ID does not exist" containerID="1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281" Nov 26 13:47:15 crc 
kubenswrapper[4695]: I1126 13:47:15.913703 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281"} err="failed to get container status \"1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281\": rpc error: code = NotFound desc = could not find container \"1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281\": container with ID starting with 1be827190cb22d312f7f37aae71ecd85669cbe7c08b5a91ffd484e242ebd2281 not found: ID does not exist" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.913722 4695 scope.go:117] "RemoveContainer" containerID="28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec" Nov 26 13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.913959 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec\": container with ID starting with 28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec not found: ID does not exist" containerID="28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.913982 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec"} err="failed to get container status \"28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec\": rpc error: code = NotFound desc = could not find container \"28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec\": container with ID starting with 28e09cd6ceee047fc5b6fce6c0b3e025dc8fcfe446efe38512461494581ce9ec not found: ID does not exist" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.913995 4695 scope.go:117] "RemoveContainer" containerID="f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221" Nov 26 
13:47:15 crc kubenswrapper[4695]: E1126 13:47:15.914230 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221\": container with ID starting with f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221 not found: ID does not exist" containerID="f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.914250 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221"} err="failed to get container status \"f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221\": rpc error: code = NotFound desc = could not find container \"f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221\": container with ID starting with f3c0ee3bbf9305f74c1814f4a23657ce05f978e0933582347dafb9ee6100e221 not found: ID does not exist" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.950084 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.978683 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-scripts\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.978778 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-config-data\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.978925 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-log-httpd\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.978983 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.979012 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.979087 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.979156 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-run-httpd\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.979179 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6mzk\" (UniqueName: \"kubernetes.io/projected/04061804-12eb-4a78-b468-3008a2054b50-kube-api-access-f6mzk\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.979601 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-log-httpd\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.983487 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.984642 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-run-httpd\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 
13:47:15.990261 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.990461 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-config-data\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:15 crc kubenswrapper[4695]: I1126 13:47:15.995495 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-scripts\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.006876 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6mzk\" (UniqueName: \"kubernetes.io/projected/04061804-12eb-4a78-b468-3008a2054b50-kube-api-access-f6mzk\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.009670 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " pod="openstack/ceilometer-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.080130 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-config-data\") pod \"d92f023c-e700-4cd9-be33-985fda81f5d7\" (UID: 
\"d92f023c-e700-4cd9-be33-985fda81f5d7\") " Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.080370 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j85nx\" (UniqueName: \"kubernetes.io/projected/d92f023c-e700-4cd9-be33-985fda81f5d7-kube-api-access-j85nx\") pod \"d92f023c-e700-4cd9-be33-985fda81f5d7\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.080459 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-combined-ca-bundle\") pod \"d92f023c-e700-4cd9-be33-985fda81f5d7\" (UID: \"d92f023c-e700-4cd9-be33-985fda81f5d7\") " Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.084203 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d92f023c-e700-4cd9-be33-985fda81f5d7-kube-api-access-j85nx" (OuterVolumeSpecName: "kube-api-access-j85nx") pod "d92f023c-e700-4cd9-be33-985fda81f5d7" (UID: "d92f023c-e700-4cd9-be33-985fda81f5d7"). InnerVolumeSpecName "kube-api-access-j85nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.107049 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-config-data" (OuterVolumeSpecName: "config-data") pod "d92f023c-e700-4cd9-be33-985fda81f5d7" (UID: "d92f023c-e700-4cd9-be33-985fda81f5d7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.109489 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d92f023c-e700-4cd9-be33-985fda81f5d7" (UID: "d92f023c-e700-4cd9-be33-985fda81f5d7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.168450 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.182600 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j85nx\" (UniqueName: \"kubernetes.io/projected/d92f023c-e700-4cd9-be33-985fda81f5d7-kube-api-access-j85nx\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.182632 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.182642 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d92f023c-e700-4cd9-be33-985fda81f5d7-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.608995 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:16 crc kubenswrapper[4695]: W1126 13:47:16.614577 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04061804_12eb_4a78_b468_3008a2054b50.slice/crio-64d6b67fdb86fbe2609dae4b358332d016bca9901f1867010c32335c07ace272 WatchSource:0}: Error finding container 
64d6b67fdb86fbe2609dae4b358332d016bca9901f1867010c32335c07ace272: Status 404 returned error can't find the container with id 64d6b67fdb86fbe2609dae4b358332d016bca9901f1867010c32335c07ace272 Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.775876 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerStarted","Data":"64d6b67fdb86fbe2609dae4b358332d016bca9901f1867010c32335c07ace272"} Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.777988 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d92f023c-e700-4cd9-be33-985fda81f5d7","Type":"ContainerDied","Data":"103e7f837b0db64bfaf57a020cc8094f570c0660bdf54a4cf8a04b3a32b37af1"} Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.778025 4695 scope.go:117] "RemoveContainer" containerID="f8e6d42c127b586faeb54b84cc2991ec9640ddbdf36e3106fa68286086c77c77" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.778153 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.811732 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.819242 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.831139 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:16 crc kubenswrapper[4695]: E1126 13:47:16.831573 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d92f023c-e700-4cd9-be33-985fda81f5d7" containerName="nova-scheduler-scheduler" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.831618 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d92f023c-e700-4cd9-be33-985fda81f5d7" containerName="nova-scheduler-scheduler" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.831840 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d92f023c-e700-4cd9-be33-985fda81f5d7" containerName="nova-scheduler-scheduler" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.832477 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.834512 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.839571 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.997071 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.997163 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7wp4\" (UniqueName: \"kubernetes.io/projected/f89f6929-5688-4695-a589-2103e1d61cac-kube-api-access-d7wp4\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:16 crc kubenswrapper[4695]: I1126 13:47:16.997428 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-config-data\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.099092 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7wp4\" (UniqueName: \"kubernetes.io/projected/f89f6929-5688-4695-a589-2103e1d61cac-kube-api-access-d7wp4\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.099192 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-config-data\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.099288 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.106108 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-config-data\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.106228 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.121796 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7wp4\" (UniqueName: \"kubernetes.io/projected/f89f6929-5688-4695-a589-2103e1d61cac-kube-api-access-d7wp4\") pod \"nova-scheduler-0\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.159782 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.198412 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416de75f-33b6-4864-9d33-799fb2413609" path="/var/lib/kubelet/pods/416de75f-33b6-4864-9d33-799fb2413609/volumes" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.199276 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d92f023c-e700-4cd9-be33-985fda81f5d7" path="/var/lib/kubelet/pods/d92f023c-e700-4cd9-be33-985fda81f5d7/volumes" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.576181 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.589310 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.710055 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-config-data\") pod \"efe5550e-649d-454e-95e1-af534e2462cc\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.710340 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-combined-ca-bundle\") pod \"efe5550e-649d-454e-95e1-af534e2462cc\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.710399 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c62kc\" (UniqueName: \"kubernetes.io/projected/efe5550e-649d-454e-95e1-af534e2462cc-kube-api-access-c62kc\") pod \"efe5550e-649d-454e-95e1-af534e2462cc\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " Nov 26 13:47:17 crc kubenswrapper[4695]: 
I1126 13:47:17.710479 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/efe5550e-649d-454e-95e1-af534e2462cc-logs\") pod \"efe5550e-649d-454e-95e1-af534e2462cc\" (UID: \"efe5550e-649d-454e-95e1-af534e2462cc\") " Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.711297 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe5550e-649d-454e-95e1-af534e2462cc-logs" (OuterVolumeSpecName: "logs") pod "efe5550e-649d-454e-95e1-af534e2462cc" (UID: "efe5550e-649d-454e-95e1-af534e2462cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.713797 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe5550e-649d-454e-95e1-af534e2462cc-kube-api-access-c62kc" (OuterVolumeSpecName: "kube-api-access-c62kc") pod "efe5550e-649d-454e-95e1-af534e2462cc" (UID: "efe5550e-649d-454e-95e1-af534e2462cc"). InnerVolumeSpecName "kube-api-access-c62kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.741066 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-config-data" (OuterVolumeSpecName: "config-data") pod "efe5550e-649d-454e-95e1-af534e2462cc" (UID: "efe5550e-649d-454e-95e1-af534e2462cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.743319 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efe5550e-649d-454e-95e1-af534e2462cc" (UID: "efe5550e-649d-454e-95e1-af534e2462cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.793504 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89f6929-5688-4695-a589-2103e1d61cac","Type":"ContainerStarted","Data":"95ffa48e09d1cb20fd9f6e505b56eb7b6c94a43084ed2c7168d84bfecda4d8c3"} Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.794842 4695 generic.go:334] "Generic (PLEG): container finished" podID="efe5550e-649d-454e-95e1-af534e2462cc" containerID="38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880" exitCode=0 Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.794908 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.794909 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe5550e-649d-454e-95e1-af534e2462cc","Type":"ContainerDied","Data":"38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880"} Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.795023 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"efe5550e-649d-454e-95e1-af534e2462cc","Type":"ContainerDied","Data":"8a2f9b715148e6d09399206c1ee751e2c7c92a9c703b32e7020d332cefe5662d"} Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.795059 4695 scope.go:117] "RemoveContainer" containerID="38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.799412 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerStarted","Data":"766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3"} Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.812737 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/efe5550e-649d-454e-95e1-af534e2462cc-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.812760 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.812770 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efe5550e-649d-454e-95e1-af534e2462cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.812778 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c62kc\" (UniqueName: \"kubernetes.io/projected/efe5550e-649d-454e-95e1-af534e2462cc-kube-api-access-c62kc\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.832784 4695 scope.go:117] "RemoveContainer" containerID="ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.834208 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.843849 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.853008 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:17 crc kubenswrapper[4695]: E1126 13:47:17.853419 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-log" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.853433 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-log" Nov 26 13:47:17 crc kubenswrapper[4695]: E1126 13:47:17.853458 4695 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-api" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.853464 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-api" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.853656 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-api" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.853690 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe5550e-649d-454e-95e1-af534e2462cc" containerName="nova-api-log" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.855025 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.857639 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.871443 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.874381 4695 scope.go:117] "RemoveContainer" containerID="38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880" Nov 26 13:47:17 crc kubenswrapper[4695]: E1126 13:47:17.874792 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880\": container with ID starting with 38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880 not found: ID does not exist" containerID="38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.874949 4695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880"} err="failed to get container status \"38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880\": rpc error: code = NotFound desc = could not find container \"38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880\": container with ID starting with 38908498f1f67a021598682ce0cca10948e569b2d2ed65bc4d5a6d3dfa49f880 not found: ID does not exist" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.875061 4695 scope.go:117] "RemoveContainer" containerID="ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d" Nov 26 13:47:17 crc kubenswrapper[4695]: E1126 13:47:17.875553 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d\": container with ID starting with ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d not found: ID does not exist" containerID="ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d" Nov 26 13:47:17 crc kubenswrapper[4695]: I1126 13:47:17.875582 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d"} err="failed to get container status \"ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d\": rpc error: code = NotFound desc = could not find container \"ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d\": container with ID starting with ebaa1d40638df105a85263686eb17764ac3ca7653b71da6beb6492abb6f2d84d not found: ID does not exist" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.015819 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-combined-ca-bundle\") pod \"nova-api-0\" 
(UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.016045 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a752ce-d496-4008-992f-083103493b3c-logs\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.016107 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62tp9\" (UniqueName: \"kubernetes.io/projected/f4a752ce-d496-4008-992f-083103493b3c-kube-api-access-62tp9\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.016138 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-config-data\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.038769 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.117623 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62tp9\" (UniqueName: \"kubernetes.io/projected/f4a752ce-d496-4008-992f-083103493b3c-kube-api-access-62tp9\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.117699 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-config-data\") pod \"nova-api-0\" (UID: 
\"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.117821 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.117854 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a752ce-d496-4008-992f-083103493b3c-logs\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.120829 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a752ce-d496-4008-992f-083103493b3c-logs\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.125258 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-config-data\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.126953 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.143852 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62tp9\" (UniqueName: 
\"kubernetes.io/projected/f4a752ce-d496-4008-992f-083103493b3c-kube-api-access-62tp9\") pod \"nova-api-0\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.174448 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.601819 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:18 crc kubenswrapper[4695]: W1126 13:47:18.611058 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a752ce_d496_4008_992f_083103493b3c.slice/crio-d02d954cead27815dee0c609d348b5315a8bdd5becdd2678005343ac19cca0da WatchSource:0}: Error finding container d02d954cead27815dee0c609d348b5315a8bdd5becdd2678005343ac19cca0da: Status 404 returned error can't find the container with id d02d954cead27815dee0c609d348b5315a8bdd5becdd2678005343ac19cca0da Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.813859 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerStarted","Data":"1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d"} Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.816762 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89f6929-5688-4695-a589-2103e1d61cac","Type":"ContainerStarted","Data":"ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23"} Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.818372 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f4a752ce-d496-4008-992f-083103493b3c","Type":"ContainerStarted","Data":"a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80"} Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.818407 
4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f4a752ce-d496-4008-992f-083103493b3c","Type":"ContainerStarted","Data":"d02d954cead27815dee0c609d348b5315a8bdd5becdd2678005343ac19cca0da"} Nov 26 13:47:18 crc kubenswrapper[4695]: I1126 13:47:18.840026 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.840005101 podStartE2EDuration="2.840005101s" podCreationTimestamp="2025-11-26 13:47:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:18.833498574 +0000 UTC m=+1422.469323676" watchObservedRunningTime="2025-11-26 13:47:18.840005101 +0000 UTC m=+1422.475830183" Nov 26 13:47:19 crc kubenswrapper[4695]: I1126 13:47:19.180284 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe5550e-649d-454e-95e1-af534e2462cc" path="/var/lib/kubelet/pods/efe5550e-649d-454e-95e1-af534e2462cc/volumes" Nov 26 13:47:19 crc kubenswrapper[4695]: I1126 13:47:19.830159 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f4a752ce-d496-4008-992f-083103493b3c","Type":"ContainerStarted","Data":"ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c"} Nov 26 13:47:19 crc kubenswrapper[4695]: I1126 13:47:19.832543 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerStarted","Data":"ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be"} Nov 26 13:47:20 crc kubenswrapper[4695]: I1126 13:47:20.277532 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 26 13:47:20 crc kubenswrapper[4695]: I1126 13:47:20.295781 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" 
podStartSLOduration=3.295763732 podStartE2EDuration="3.295763732s" podCreationTimestamp="2025-11-26 13:47:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:19.850094154 +0000 UTC m=+1423.485919246" watchObservedRunningTime="2025-11-26 13:47:20.295763732 +0000 UTC m=+1423.931588814" Nov 26 13:47:21 crc kubenswrapper[4695]: I1126 13:47:21.861437 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerStarted","Data":"4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b"} Nov 26 13:47:21 crc kubenswrapper[4695]: I1126 13:47:21.861691 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 13:47:21 crc kubenswrapper[4695]: I1126 13:47:21.889685 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.751493611 podStartE2EDuration="6.889669775s" podCreationTimestamp="2025-11-26 13:47:15 +0000 UTC" firstStartedPulling="2025-11-26 13:47:16.61684407 +0000 UTC m=+1420.252669152" lastFinishedPulling="2025-11-26 13:47:20.755020234 +0000 UTC m=+1424.390845316" observedRunningTime="2025-11-26 13:47:21.88228389 +0000 UTC m=+1425.518108972" watchObservedRunningTime="2025-11-26 13:47:21.889669775 +0000 UTC m=+1425.525494857" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.062456 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mmj6w"] Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.064712 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.068942 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mmj6w"] Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.130337 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-utilities\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.130499 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-catalog-content\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.130705 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcdzv\" (UniqueName: \"kubernetes.io/projected/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-kube-api-access-wcdzv\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.162115 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.232430 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcdzv\" (UniqueName: \"kubernetes.io/projected/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-kube-api-access-wcdzv\") pod \"certified-operators-mmj6w\" (UID: 
\"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.232581 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-utilities\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.232663 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-catalog-content\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.233365 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-utilities\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.233413 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-catalog-content\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.250149 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcdzv\" (UniqueName: \"kubernetes.io/projected/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-kube-api-access-wcdzv\") pod \"certified-operators-mmj6w\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") " 
pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.388522 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:22 crc kubenswrapper[4695]: W1126 13:47:22.877073 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d1ec5ab_0a96_47f3_a1f8_80e672db0567.slice/crio-5cc5fac5695a3245d39b07f288cee175c5723a21ca66691f3206763a518108b9 WatchSource:0}: Error finding container 5cc5fac5695a3245d39b07f288cee175c5723a21ca66691f3206763a518108b9: Status 404 returned error can't find the container with id 5cc5fac5695a3245d39b07f288cee175c5723a21ca66691f3206763a518108b9 Nov 26 13:47:22 crc kubenswrapper[4695]: I1126 13:47:22.879199 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mmj6w"] Nov 26 13:47:23 crc kubenswrapper[4695]: I1126 13:47:23.880124 4695 generic.go:334] "Generic (PLEG): container finished" podID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerID="27e465095f95ecf620a7c52f2d71ad604a793e3ed02fd2999c9a9817da63664e" exitCode=0 Nov 26 13:47:23 crc kubenswrapper[4695]: I1126 13:47:23.880183 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmj6w" event={"ID":"5d1ec5ab-0a96-47f3-a1f8-80e672db0567","Type":"ContainerDied","Data":"27e465095f95ecf620a7c52f2d71ad604a793e3ed02fd2999c9a9817da63664e"} Nov 26 13:47:23 crc kubenswrapper[4695]: I1126 13:47:23.880520 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmj6w" event={"ID":"5d1ec5ab-0a96-47f3-a1f8-80e672db0567","Type":"ContainerStarted","Data":"5cc5fac5695a3245d39b07f288cee175c5723a21ca66691f3206763a518108b9"} Nov 26 13:47:24 crc kubenswrapper[4695]: I1126 13:47:24.892668 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-mmj6w" event={"ID":"5d1ec5ab-0a96-47f3-a1f8-80e672db0567","Type":"ContainerStarted","Data":"8dfd254f221d82c6f370ad1435107d7637f0228226db3a6b38355b0c89b4add9"} Nov 26 13:47:25 crc kubenswrapper[4695]: I1126 13:47:25.904706 4695 generic.go:334] "Generic (PLEG): container finished" podID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerID="8dfd254f221d82c6f370ad1435107d7637f0228226db3a6b38355b0c89b4add9" exitCode=0 Nov 26 13:47:25 crc kubenswrapper[4695]: I1126 13:47:25.904751 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmj6w" event={"ID":"5d1ec5ab-0a96-47f3-a1f8-80e672db0567","Type":"ContainerDied","Data":"8dfd254f221d82c6f370ad1435107d7637f0228226db3a6b38355b0c89b4add9"} Nov 26 13:47:27 crc kubenswrapper[4695]: I1126 13:47:27.175330 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 13:47:27 crc kubenswrapper[4695]: I1126 13:47:27.202996 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 13:47:27 crc kubenswrapper[4695]: I1126 13:47:27.923331 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmj6w" event={"ID":"5d1ec5ab-0a96-47f3-a1f8-80e672db0567","Type":"ContainerStarted","Data":"c91ef33be91455eb3e43d14b117d1d87239f8733fa19cb9d14ca163dc053fc1f"} Nov 26 13:47:27 crc kubenswrapper[4695]: I1126 13:47:27.960433 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 13:47:27 crc kubenswrapper[4695]: I1126 13:47:27.965862 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mmj6w" podStartSLOduration=2.84244341 podStartE2EDuration="5.965831063s" podCreationTimestamp="2025-11-26 13:47:22 +0000 UTC" firstStartedPulling="2025-11-26 13:47:23.8820389 +0000 UTC 
m=+1427.517863982" lastFinishedPulling="2025-11-26 13:47:27.005426543 +0000 UTC m=+1430.641251635" observedRunningTime="2025-11-26 13:47:27.951714844 +0000 UTC m=+1431.587539926" watchObservedRunningTime="2025-11-26 13:47:27.965831063 +0000 UTC m=+1431.601656215" Nov 26 13:47:28 crc kubenswrapper[4695]: I1126 13:47:28.175488 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 13:47:28 crc kubenswrapper[4695]: I1126 13:47:28.175544 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 13:47:29 crc kubenswrapper[4695]: I1126 13:47:29.258584 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 13:47:29 crc kubenswrapper[4695]: I1126 13:47:29.258637 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 26 13:47:32 crc kubenswrapper[4695]: I1126 13:47:32.389527 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:32 crc kubenswrapper[4695]: I1126 13:47:32.389850 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:32 crc kubenswrapper[4695]: I1126 13:47:32.440178 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:33 crc kubenswrapper[4695]: I1126 13:47:33.013614 4695 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mmj6w" Nov 26 13:47:33 crc kubenswrapper[4695]: I1126 13:47:33.062229 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mmj6w"] Nov 26 13:47:34 crc kubenswrapper[4695]: I1126 13:47:34.977255 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:47:34 crc kubenswrapper[4695]: I1126 13:47:34.986802 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:47:34 crc kubenswrapper[4695]: I1126 13:47:34.994554 4695 generic.go:334] "Generic (PLEG): container finished" podID="69c7a964-94f7-4640-9f24-49dc0736047a" containerID="7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c" exitCode=137 Nov 26 13:47:34 crc kubenswrapper[4695]: I1126 13:47:34.994667 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:47:34 crc kubenswrapper[4695]: I1126 13:47:34.994690 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"69c7a964-94f7-4640-9f24-49dc0736047a","Type":"ContainerDied","Data":"7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c"} Nov 26 13:47:34 crc kubenswrapper[4695]: I1126 13:47:34.995076 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"69c7a964-94f7-4640-9f24-49dc0736047a","Type":"ContainerDied","Data":"f58d3faf090f545d1aa607df026023710a1a4164061078dc54f1a27a2c58c0b9"} Nov 26 13:47:34 crc kubenswrapper[4695]: I1126 13:47:34.995137 4695 scope.go:117] "RemoveContainer" containerID="7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.006708 4695 generic.go:334] "Generic (PLEG): container finished" podID="671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" 
containerID="51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4" exitCode=137 Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.006747 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca","Type":"ContainerDied","Data":"51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4"} Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.006803 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.006785 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca","Type":"ContainerDied","Data":"6ae1021948bf64d2c6691890abceb772fcd0e3e5d1f498e25179c68f4d44e9fd"} Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.007288 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mmj6w" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="registry-server" containerID="cri-o://c91ef33be91455eb3e43d14b117d1d87239f8733fa19cb9d14ca163dc053fc1f" gracePeriod=2 Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.041474 4695 scope.go:117] "RemoveContainer" containerID="dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.068067 4695 scope.go:117] "RemoveContainer" containerID="7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c" Nov 26 13:47:35 crc kubenswrapper[4695]: E1126 13:47:35.068773 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c\": container with ID starting with 7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c not found: ID does not 
exist" containerID="7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.068838 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c"} err="failed to get container status \"7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c\": rpc error: code = NotFound desc = could not find container \"7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c\": container with ID starting with 7f593d3fef2281b7adc44e233d12149a2cc7421d49f3a467deb940e92e153d9c not found: ID does not exist" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.068870 4695 scope.go:117] "RemoveContainer" containerID="dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50" Nov 26 13:47:35 crc kubenswrapper[4695]: E1126 13:47:35.069273 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50\": container with ID starting with dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50 not found: ID does not exist" containerID="dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.069309 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50"} err="failed to get container status \"dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50\": rpc error: code = NotFound desc = could not find container \"dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50\": container with ID starting with dc92e4485e2a0eb1037eb2ca3db86f7c09c7fb682a999d38d80238d27d78cd50 not found: ID does not exist" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.069329 4695 scope.go:117] 
"RemoveContainer" containerID="51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.080860 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-config-data\") pod \"69c7a964-94f7-4640-9f24-49dc0736047a\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.080966 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwpmw\" (UniqueName: \"kubernetes.io/projected/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-kube-api-access-gwpmw\") pod \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.081103 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-combined-ca-bundle\") pod \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.081204 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-config-data\") pod \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\" (UID: \"671b4178-bcac-4fa4-8f6c-cb0c3163f3ca\") " Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.081257 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmr6k\" (UniqueName: \"kubernetes.io/projected/69c7a964-94f7-4640-9f24-49dc0736047a-kube-api-access-jmr6k\") pod \"69c7a964-94f7-4640-9f24-49dc0736047a\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.081326 4695 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-combined-ca-bundle\") pod \"69c7a964-94f7-4640-9f24-49dc0736047a\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.081372 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c7a964-94f7-4640-9f24-49dc0736047a-logs\") pod \"69c7a964-94f7-4640-9f24-49dc0736047a\" (UID: \"69c7a964-94f7-4640-9f24-49dc0736047a\") " Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.081877 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69c7a964-94f7-4640-9f24-49dc0736047a-logs" (OuterVolumeSpecName: "logs") pod "69c7a964-94f7-4640-9f24-49dc0736047a" (UID: "69c7a964-94f7-4640-9f24-49dc0736047a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.087544 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-kube-api-access-gwpmw" (OuterVolumeSpecName: "kube-api-access-gwpmw") pod "671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" (UID: "671b4178-bcac-4fa4-8f6c-cb0c3163f3ca"). InnerVolumeSpecName "kube-api-access-gwpmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.093080 4695 scope.go:117] "RemoveContainer" containerID="51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4" Nov 26 13:47:35 crc kubenswrapper[4695]: E1126 13:47:35.093411 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4\": container with ID starting with 51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4 not found: ID does not exist" containerID="51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.093448 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4"} err="failed to get container status \"51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4\": rpc error: code = NotFound desc = could not find container \"51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4\": container with ID starting with 51fa7405f8084d98f5c99a87f187f690485cf69582a4c27d4cacb660f2d968c4 not found: ID does not exist" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.093612 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69c7a964-94f7-4640-9f24-49dc0736047a-kube-api-access-jmr6k" (OuterVolumeSpecName: "kube-api-access-jmr6k") pod "69c7a964-94f7-4640-9f24-49dc0736047a" (UID: "69c7a964-94f7-4640-9f24-49dc0736047a"). InnerVolumeSpecName "kube-api-access-jmr6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.110566 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-config-data" (OuterVolumeSpecName: "config-data") pod "671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" (UID: "671b4178-bcac-4fa4-8f6c-cb0c3163f3ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.112195 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-config-data" (OuterVolumeSpecName: "config-data") pod "69c7a964-94f7-4640-9f24-49dc0736047a" (UID: "69c7a964-94f7-4640-9f24-49dc0736047a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.113374 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" (UID: "671b4178-bcac-4fa4-8f6c-cb0c3163f3ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.119931 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69c7a964-94f7-4640-9f24-49dc0736047a" (UID: "69c7a964-94f7-4640-9f24-49dc0736047a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.182793 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.182825 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwpmw\" (UniqueName: \"kubernetes.io/projected/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-kube-api-access-gwpmw\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.182836 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.182846 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.182855 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmr6k\" (UniqueName: \"kubernetes.io/projected/69c7a964-94f7-4640-9f24-49dc0736047a-kube-api-access-jmr6k\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.182866 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69c7a964-94f7-4640-9f24-49dc0736047a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.182875 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/69c7a964-94f7-4640-9f24-49dc0736047a-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.316078 4695 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.326525 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.338741 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.350633 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.390384 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:35 crc kubenswrapper[4695]: E1126 13:47:35.390792 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-metadata" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.390824 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-metadata" Nov 26 13:47:35 crc kubenswrapper[4695]: E1126 13:47:35.390869 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.390877 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" containerName="nova-cell1-novncproxy-novncproxy" Nov 26 13:47:35 crc kubenswrapper[4695]: E1126 13:47:35.390893 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-log" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.390900 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-log" Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 
13:47:35.391136 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" containerName="nova-cell1-novncproxy-novncproxy"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.391164 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-log"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.391187 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" containerName="nova-metadata-metadata"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.392399 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.396756 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.396947 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.399966 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.407098 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.408301 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.410675 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.410830 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.410957 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.414123 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.500477 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.500531 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ss6l\" (UniqueName: \"kubernetes.io/projected/e7e4f67c-33d9-4198-9941-5bca025cc766-kube-api-access-8ss6l\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.500610 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e4f67c-33d9-4198-9941-5bca025cc766-logs\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.500661 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.501695 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-config-data\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603223 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603264 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ss6l\" (UniqueName: \"kubernetes.io/projected/e7e4f67c-33d9-4198-9941-5bca025cc766-kube-api-access-8ss6l\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603290 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603330 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603663 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e4f67c-33d9-4198-9941-5bca025cc766-logs\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603797 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603866 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603897 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-config-data\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.603946 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbf4w\" (UniqueName: \"kubernetes.io/projected/0ddfca26-214e-472a-90ca-e0088717125e-kube-api-access-nbf4w\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.604033 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e4f67c-33d9-4198-9941-5bca025cc766-logs\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.604059 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.608368 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.609132 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-config-data\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.616506 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.618413 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ss6l\" (UniqueName: \"kubernetes.io/projected/e7e4f67c-33d9-4198-9941-5bca025cc766-kube-api-access-8ss6l\") pod \"nova-metadata-0\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.705668 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.705725 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.705785 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.705809 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbf4w\" (UniqueName: \"kubernetes.io/projected/0ddfca26-214e-472a-90ca-e0088717125e-kube-api-access-nbf4w\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.705837 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.708880 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.708989 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.709788 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.711137 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ddfca26-214e-472a-90ca-e0088717125e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.718786 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.724390 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbf4w\" (UniqueName: \"kubernetes.io/projected/0ddfca26-214e-472a-90ca-e0088717125e-kube-api-access-nbf4w\") pod \"nova-cell1-novncproxy-0\" (UID: \"0ddfca26-214e-472a-90ca-e0088717125e\") " pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:35 crc kubenswrapper[4695]: I1126 13:47:35.729596 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.025799 4695 generic.go:334] "Generic (PLEG): container finished" podID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerID="c91ef33be91455eb3e43d14b117d1d87239f8733fa19cb9d14ca163dc053fc1f" exitCode=0
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.025856 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmj6w" event={"ID":"5d1ec5ab-0a96-47f3-a1f8-80e672db0567","Type":"ContainerDied","Data":"c91ef33be91455eb3e43d14b117d1d87239f8733fa19cb9d14ca163dc053fc1f"}
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.155689 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Nov 26 13:47:36 crc kubenswrapper[4695]: W1126 13:47:36.161821 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7e4f67c_33d9_4198_9941_5bca025cc766.slice/crio-eb2bf12cd897add7e33140f7b45aaa88d26121b897dab7b256029b712292d9c5 WatchSource:0}: Error finding container eb2bf12cd897add7e33140f7b45aaa88d26121b897dab7b256029b712292d9c5: Status 404 returned error can't find the container with id eb2bf12cd897add7e33140f7b45aaa88d26121b897dab7b256029b712292d9c5
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.222761 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.721986 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmj6w"
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.835725 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-utilities\") pod \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") "
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.836051 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-catalog-content\") pod \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") "
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.836096 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcdzv\" (UniqueName: \"kubernetes.io/projected/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-kube-api-access-wcdzv\") pod \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\" (UID: \"5d1ec5ab-0a96-47f3-a1f8-80e672db0567\") "
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.837202 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-utilities" (OuterVolumeSpecName: "utilities") pod "5d1ec5ab-0a96-47f3-a1f8-80e672db0567" (UID: "5d1ec5ab-0a96-47f3-a1f8-80e672db0567"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.845713 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-kube-api-access-wcdzv" (OuterVolumeSpecName: "kube-api-access-wcdzv") pod "5d1ec5ab-0a96-47f3-a1f8-80e672db0567" (UID: "5d1ec5ab-0a96-47f3-a1f8-80e672db0567"). InnerVolumeSpecName "kube-api-access-wcdzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.892361 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d1ec5ab-0a96-47f3-a1f8-80e672db0567" (UID: "5d1ec5ab-0a96-47f3-a1f8-80e672db0567"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.937913 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.937945 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 13:47:36 crc kubenswrapper[4695]: I1126 13:47:36.937956 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcdzv\" (UniqueName: \"kubernetes.io/projected/5d1ec5ab-0a96-47f3-a1f8-80e672db0567-kube-api-access-wcdzv\") on node \"crc\" DevicePath \"\""
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.042251 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mmj6w" event={"ID":"5d1ec5ab-0a96-47f3-a1f8-80e672db0567","Type":"ContainerDied","Data":"5cc5fac5695a3245d39b07f288cee175c5723a21ca66691f3206763a518108b9"}
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.042286 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mmj6w"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.042315 4695 scope.go:117] "RemoveContainer" containerID="c91ef33be91455eb3e43d14b117d1d87239f8733fa19cb9d14ca163dc053fc1f"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.045976 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ddfca26-214e-472a-90ca-e0088717125e","Type":"ContainerStarted","Data":"a61af42f760ee67c737c64e6c529fa5e7eb915916d51144fefebb1fd4525741e"}
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.046051 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"0ddfca26-214e-472a-90ca-e0088717125e","Type":"ContainerStarted","Data":"f9d3d1f358d847ab03ce46785dc3175f1fde05d4eda0ec4ec0b4999e449f710b"}
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.050756 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7e4f67c-33d9-4198-9941-5bca025cc766","Type":"ContainerStarted","Data":"296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226"}
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.050802 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7e4f67c-33d9-4198-9941-5bca025cc766","Type":"ContainerStarted","Data":"ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c"}
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.050819 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7e4f67c-33d9-4198-9941-5bca025cc766","Type":"ContainerStarted","Data":"eb2bf12cd897add7e33140f7b45aaa88d26121b897dab7b256029b712292d9c5"}
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.071178 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.071156599 podStartE2EDuration="2.071156599s" podCreationTimestamp="2025-11-26 13:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:37.061051078 +0000 UTC m=+1440.696876160" watchObservedRunningTime="2025-11-26 13:47:37.071156599 +0000 UTC m=+1440.706981681"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.074335 4695 scope.go:117] "RemoveContainer" containerID="8dfd254f221d82c6f370ad1435107d7637f0228226db3a6b38355b0c89b4add9"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.100696 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.100682568 podStartE2EDuration="2.100682568s" podCreationTimestamp="2025-11-26 13:47:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:37.100065718 +0000 UTC m=+1440.735890800" watchObservedRunningTime="2025-11-26 13:47:37.100682568 +0000 UTC m=+1440.736507650"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.104780 4695 scope.go:117] "RemoveContainer" containerID="27e465095f95ecf620a7c52f2d71ad604a793e3ed02fd2999c9a9817da63664e"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.121727 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mmj6w"]
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.132163 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mmj6w"]
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.175582 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" path="/var/lib/kubelet/pods/5d1ec5ab-0a96-47f3-a1f8-80e672db0567/volumes"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.176260 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="671b4178-bcac-4fa4-8f6c-cb0c3163f3ca" path="/var/lib/kubelet/pods/671b4178-bcac-4fa4-8f6c-cb0c3163f3ca/volumes"
Nov 26 13:47:37 crc kubenswrapper[4695]: I1126 13:47:37.177311 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69c7a964-94f7-4640-9f24-49dc0736047a" path="/var/lib/kubelet/pods/69c7a964-94f7-4640-9f24-49dc0736047a/volumes"
Nov 26 13:47:38 crc kubenswrapper[4695]: I1126 13:47:38.179558 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 26 13:47:38 crc kubenswrapper[4695]: I1126 13:47:38.180618 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 26 13:47:38 crc kubenswrapper[4695]: I1126 13:47:38.184009 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Nov 26 13:47:38 crc kubenswrapper[4695]: I1126 13:47:38.190756 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.077521 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.081526 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.284327 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"]
Nov 26 13:47:39 crc kubenswrapper[4695]: E1126 13:47:39.284734 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="registry-server"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.284745 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="registry-server"
Nov 26 13:47:39 crc kubenswrapper[4695]: E1126 13:47:39.284755 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="extract-utilities"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.284761 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="extract-utilities"
Nov 26 13:47:39 crc kubenswrapper[4695]: E1126 13:47:39.284781 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="extract-content"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.284787 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="extract-content"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.284969 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d1ec5ab-0a96-47f3-a1f8-80e672db0567" containerName="registry-server"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.285967 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.296940 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"]
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.483088 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gsmg\" (UniqueName: \"kubernetes.io/projected/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-kube-api-access-4gsmg\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.483163 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.483313 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-config\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.483410 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.483448 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.483679 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.585774 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gsmg\" (UniqueName: \"kubernetes.io/projected/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-kube-api-access-4gsmg\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.585861 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.585936 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-config\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.585967 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.585986 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.586018 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.587417 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.587531 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.587615 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.587715 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-config\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.588437 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.613889 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gsmg\" (UniqueName: \"kubernetes.io/projected/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-kube-api-access-4gsmg\") pod \"dnsmasq-dns-5c7b6c5df9-vh7hj\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:39 crc kubenswrapper[4695]: I1126 13:47:39.912537 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"
Nov 26 13:47:40 crc kubenswrapper[4695]: I1126 13:47:40.375628 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"]
Nov 26 13:47:40 crc kubenswrapper[4695]: I1126 13:47:40.720004 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 26 13:47:40 crc kubenswrapper[4695]: I1126 13:47:40.721389 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Nov 26 13:47:40 crc kubenswrapper[4695]: I1126 13:47:40.730286 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.101550 4695 generic.go:334] "Generic (PLEG): container finished" podID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" containerID="54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b" exitCode=0
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.102803 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" event={"ID":"2229721b-1c0b-4ebb-b51c-8e74fc407cbe","Type":"ContainerDied","Data":"54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b"}
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.102832 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" event={"ID":"2229721b-1c0b-4ebb-b51c-8e74fc407cbe","Type":"ContainerStarted","Data":"bc2dd3c8fd02a410ae2d4b0810f11c093bddac50639e7b5293a4511ad6b86f33"}
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.299617 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.299939 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-central-agent" containerID="cri-o://766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3" gracePeriod=30
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.300071 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="proxy-httpd" containerID="cri-o://4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b" gracePeriod=30
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.300109 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="sg-core" containerID="cri-o://ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be" gracePeriod=30
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.300169 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-notification-agent" containerID="cri-o://1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d" gracePeriod=30
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.306365 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.195:3000/\": read tcp 10.217.0.2:46694->10.217.0.195:3000: read: connection reset by peer"
Nov 26 13:47:41 crc kubenswrapper[4695]: I1126 13:47:41.473439 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.111878 4695 generic.go:334] "Generic (PLEG): container finished" podID="04061804-12eb-4a78-b468-3008a2054b50" containerID="4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b" exitCode=0
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.111912 4695 generic.go:334] "Generic (PLEG): container finished" podID="04061804-12eb-4a78-b468-3008a2054b50" containerID="ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be" exitCode=2
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.111923 4695 generic.go:334] "Generic (PLEG): container finished" podID="04061804-12eb-4a78-b468-3008a2054b50" containerID="766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3" exitCode=0
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.111917 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerDied","Data":"4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b"}
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.111963 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerDied","Data":"ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be"}
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.112294 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerDied","Data":"766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3"}
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.114561 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" event={"ID":"2229721b-1c0b-4ebb-b51c-8e74fc407cbe","Type":"ContainerStarted","Data":"8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2"}
Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.114698 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-log"
containerID="cri-o://a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80" gracePeriod=30 Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.114750 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-api" containerID="cri-o://ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c" gracePeriod=30 Nov 26 13:47:42 crc kubenswrapper[4695]: I1126 13:47:42.141453 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" podStartSLOduration=3.141437421 podStartE2EDuration="3.141437421s" podCreationTimestamp="2025-11-26 13:47:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:42.139610364 +0000 UTC m=+1445.775435446" watchObservedRunningTime="2025-11-26 13:47:42.141437421 +0000 UTC m=+1445.777262503" Nov 26 13:47:43 crc kubenswrapper[4695]: I1126 13:47:43.124835 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4a752ce-d496-4008-992f-083103493b3c" containerID="a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80" exitCode=143 Nov 26 13:47:43 crc kubenswrapper[4695]: I1126 13:47:43.124925 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f4a752ce-d496-4008-992f-083103493b3c","Type":"ContainerDied","Data":"a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80"} Nov 26 13:47:43 crc kubenswrapper[4695]: I1126 13:47:43.125397 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.774009 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.887564 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-config-data\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.887934 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6mzk\" (UniqueName: \"kubernetes.io/projected/04061804-12eb-4a78-b468-3008a2054b50-kube-api-access-f6mzk\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888037 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-combined-ca-bundle\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888057 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-run-httpd\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888105 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-log-httpd\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888193 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-scripts\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888612 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888642 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888769 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-sg-core-conf-yaml\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.888803 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-ceilometer-tls-certs\") pod \"04061804-12eb-4a78-b468-3008a2054b50\" (UID: \"04061804-12eb-4a78-b468-3008a2054b50\") " Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.889676 4695 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-run-httpd\") on node \"crc\" DevicePath \"\"" 
Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.889702 4695 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/04061804-12eb-4a78-b468-3008a2054b50-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.893805 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-scripts" (OuterVolumeSpecName: "scripts") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.896207 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04061804-12eb-4a78-b468-3008a2054b50-kube-api-access-f6mzk" (OuterVolumeSpecName: "kube-api-access-f6mzk") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "kube-api-access-f6mzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.932700 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.946405 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.966639 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.990203 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-config-data" (OuterVolumeSpecName: "config-data") pod "04061804-12eb-4a78-b468-3008a2054b50" (UID: "04061804-12eb-4a78-b468-3008a2054b50"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.990811 4695 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.990905 4695 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.990982 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.991067 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6mzk\" (UniqueName: \"kubernetes.io/projected/04061804-12eb-4a78-b468-3008a2054b50-kube-api-access-f6mzk\") on node \"crc\" 
DevicePath \"\"" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.991153 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:44 crc kubenswrapper[4695]: I1126 13:47:44.991226 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04061804-12eb-4a78-b468-3008a2054b50-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.144617 4695 generic.go:334] "Generic (PLEG): container finished" podID="04061804-12eb-4a78-b468-3008a2054b50" containerID="1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d" exitCode=0 Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.144673 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.144710 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerDied","Data":"1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d"} Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.144780 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"04061804-12eb-4a78-b468-3008a2054b50","Type":"ContainerDied","Data":"64d6b67fdb86fbe2609dae4b358332d016bca9901f1867010c32335c07ace272"} Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.144811 4695 scope.go:117] "RemoveContainer" containerID="4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.172728 4695 scope.go:117] "RemoveContainer" containerID="ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.207542 4695 
scope.go:117] "RemoveContainer" containerID="1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.222817 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.244787 4695 scope.go:117] "RemoveContainer" containerID="766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.245684 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.271648 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:45 crc kubenswrapper[4695]: E1126 13:47:45.272291 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="proxy-httpd" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272315 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="proxy-httpd" Nov 26 13:47:45 crc kubenswrapper[4695]: E1126 13:47:45.272335 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-notification-agent" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272362 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-notification-agent" Nov 26 13:47:45 crc kubenswrapper[4695]: E1126 13:47:45.272371 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="sg-core" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272380 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="sg-core" Nov 26 13:47:45 crc kubenswrapper[4695]: E1126 13:47:45.272402 4695 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-central-agent" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272410 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-central-agent" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272666 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-notification-agent" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272688 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="proxy-httpd" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272701 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="sg-core" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.272732 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="04061804-12eb-4a78-b468-3008a2054b50" containerName="ceilometer-central-agent" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.275753 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.277465 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.278507 4695 scope.go:117] "RemoveContainer" containerID="4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.278872 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 26 13:47:45 crc kubenswrapper[4695]: E1126 13:47:45.279016 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b\": container with ID starting with 4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b not found: ID does not exist" containerID="4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.279092 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.279091 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b"} err="failed to get container status \"4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b\": rpc error: code = NotFound desc = could not find container \"4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b\": container with ID starting with 4fd32bb220256b79671faff513a207872f700f090450ccadcbefb84caa2fd48b not found: ID does not exist" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.279164 4695 scope.go:117] "RemoveContainer" containerID="ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be" Nov 26 13:47:45 crc 
kubenswrapper[4695]: E1126 13:47:45.279615 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be\": container with ID starting with ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be not found: ID does not exist" containerID="ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.279695 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be"} err="failed to get container status \"ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be\": rpc error: code = NotFound desc = could not find container \"ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be\": container with ID starting with ee1c64824dd82e679203c2b55a098024225a8d8b038c547c7f1285976d29e6be not found: ID does not exist" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.279761 4695 scope.go:117] "RemoveContainer" containerID="1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d" Nov 26 13:47:45 crc kubenswrapper[4695]: E1126 13:47:45.280505 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d\": container with ID starting with 1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d not found: ID does not exist" containerID="1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.280556 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d"} err="failed to get container status 
\"1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d\": rpc error: code = NotFound desc = could not find container \"1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d\": container with ID starting with 1daa44a90fca7457f2d19ebfadf05d3a4a696ff6088f8dccee84d98602d4e17d not found: ID does not exist" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.280589 4695 scope.go:117] "RemoveContainer" containerID="766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3" Nov 26 13:47:45 crc kubenswrapper[4695]: E1126 13:47:45.283637 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3\": container with ID starting with 766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3 not found: ID does not exist" containerID="766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.283679 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3"} err="failed to get container status \"766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3\": rpc error: code = NotFound desc = could not find container \"766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3\": container with ID starting with 766a0f674cf48eb417e5e9b8c879277757cbad54b83ff35541c993db258573f3 not found: ID does not exist" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.283874 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.399553 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.399680 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.399724 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv489\" (UniqueName: \"kubernetes.io/projected/0b694fa3-bda0-4522-bc11-61c47db527af-kube-api-access-jv489\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.399810 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b694fa3-bda0-4522-bc11-61c47db527af-run-httpd\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.399863 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.399887 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-scripts\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc 
kubenswrapper[4695]: I1126 13:47:45.399966 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b694fa3-bda0-4522-bc11-61c47db527af-log-httpd\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.400042 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-config-data\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501038 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv489\" (UniqueName: \"kubernetes.io/projected/0b694fa3-bda0-4522-bc11-61c47db527af-kube-api-access-jv489\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501106 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b694fa3-bda0-4522-bc11-61c47db527af-run-httpd\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501126 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501151 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-scripts\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501175 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b694fa3-bda0-4522-bc11-61c47db527af-log-httpd\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501223 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-config-data\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501276 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.501317 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.502021 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b694fa3-bda0-4522-bc11-61c47db527af-log-httpd\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.502299 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b694fa3-bda0-4522-bc11-61c47db527af-run-httpd\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.505942 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.506651 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.507310 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.507339 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-scripts\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.507727 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b694fa3-bda0-4522-bc11-61c47db527af-config-data\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " 
pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.517844 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv489\" (UniqueName: \"kubernetes.io/projected/0b694fa3-bda0-4522-bc11-61c47db527af-kube-api-access-jv489\") pod \"ceilometer-0\" (UID: \"0b694fa3-bda0-4522-bc11-61c47db527af\") " pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.591779 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.719540 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.719871 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.730050 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:47:45 crc kubenswrapper[4695]: I1126 13:47:45.768803 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.073830 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 26 13:47:47 crc kubenswrapper[4695]: W1126 13:47:46.078705 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b694fa3_bda0_4522_bc11_61c47db527af.slice/crio-4cdb2b1f2d6dad4a30e210ac3aee0d7e3a6db8e65efaeb26b031ba87acf8ba62 WatchSource:0}: Error finding container 4cdb2b1f2d6dad4a30e210ac3aee0d7e3a6db8e65efaeb26b031ba87acf8ba62: Status 404 returned error can't find the container with id 4cdb2b1f2d6dad4a30e210ac3aee0d7e3a6db8e65efaeb26b031ba87acf8ba62 Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 
13:47:46.122331 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.176141 4695 generic.go:334] "Generic (PLEG): container finished" podID="f4a752ce-d496-4008-992f-083103493b3c" containerID="ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c" exitCode=0 Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.176209 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f4a752ce-d496-4008-992f-083103493b3c","Type":"ContainerDied","Data":"ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c"} Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.176239 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f4a752ce-d496-4008-992f-083103493b3c","Type":"ContainerDied","Data":"d02d954cead27815dee0c609d348b5315a8bdd5becdd2678005343ac19cca0da"} Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.176259 4695 scope.go:117] "RemoveContainer" containerID="ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.176398 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.179790 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b694fa3-bda0-4522-bc11-61c47db527af","Type":"ContainerStarted","Data":"4cdb2b1f2d6dad4a30e210ac3aee0d7e3a6db8e65efaeb26b031ba87acf8ba62"} Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.211840 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.223766 4695 scope.go:117] "RemoveContainer" containerID="a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.223992 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62tp9\" (UniqueName: \"kubernetes.io/projected/f4a752ce-d496-4008-992f-083103493b3c-kube-api-access-62tp9\") pod \"f4a752ce-d496-4008-992f-083103493b3c\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.224192 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a752ce-d496-4008-992f-083103493b3c-logs\") pod \"f4a752ce-d496-4008-992f-083103493b3c\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.224223 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-config-data\") pod \"f4a752ce-d496-4008-992f-083103493b3c\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.224294 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-combined-ca-bundle\") pod \"f4a752ce-d496-4008-992f-083103493b3c\" (UID: \"f4a752ce-d496-4008-992f-083103493b3c\") " Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.224889 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4a752ce-d496-4008-992f-083103493b3c-logs" (OuterVolumeSpecName: "logs") pod "f4a752ce-d496-4008-992f-083103493b3c" (UID: "f4a752ce-d496-4008-992f-083103493b3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.240648 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a752ce-d496-4008-992f-083103493b3c-kube-api-access-62tp9" (OuterVolumeSpecName: "kube-api-access-62tp9") pod "f4a752ce-d496-4008-992f-083103493b3c" (UID: "f4a752ce-d496-4008-992f-083103493b3c"). InnerVolumeSpecName "kube-api-access-62tp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.266997 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-config-data" (OuterVolumeSpecName: "config-data") pod "f4a752ce-d496-4008-992f-083103493b3c" (UID: "f4a752ce-d496-4008-992f-083103493b3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.272457 4695 scope.go:117] "RemoveContainer" containerID="ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c" Nov 26 13:47:47 crc kubenswrapper[4695]: E1126 13:47:46.277533 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c\": container with ID starting with ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c not found: ID does not exist" containerID="ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.277585 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c"} err="failed to get container status \"ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c\": rpc error: code = NotFound desc = could not find container \"ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c\": container with ID starting with ff7dc0f92dc003faa1e1e2a3e92ef23f924cd2bc532b5fe8cc63a8a91b30a31c not found: ID does not exist" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.277619 4695 scope.go:117] "RemoveContainer" containerID="a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80" Nov 26 13:47:47 crc kubenswrapper[4695]: E1126 13:47:46.279759 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80\": container with ID starting with a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80 not found: ID does not exist" containerID="a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.279792 
4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80"} err="failed to get container status \"a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80\": rpc error: code = NotFound desc = could not find container \"a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80\": container with ID starting with a0073aea68cfe3eac837c2ff4fa1cd8ffb9ba6cdc049e747b54734dc42449b80 not found: ID does not exist" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.315243 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4a752ce-d496-4008-992f-083103493b3c" (UID: "f4a752ce-d496-4008-992f-083103493b3c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.330307 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a752ce-d496-4008-992f-083103493b3c-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.330339 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.330379 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a752ce-d496-4008-992f-083103493b3c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.330393 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62tp9\" (UniqueName: 
\"kubernetes.io/projected/f4a752ce-d496-4008-992f-083103493b3c-kube-api-access-62tp9\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.352128 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-kw4xf"] Nov 26 13:47:47 crc kubenswrapper[4695]: E1126 13:47:46.352545 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-api" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.352559 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-api" Nov 26 13:47:47 crc kubenswrapper[4695]: E1126 13:47:46.352581 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-log" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.352587 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-log" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.352742 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-log" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.352766 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a752ce-d496-4008-992f-083103493b3c" containerName="nova-api-api" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.356764 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.359926 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.360126 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.385793 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw4xf"] Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.431465 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwjvp\" (UniqueName: \"kubernetes.io/projected/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-kube-api-access-rwjvp\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.431548 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-scripts\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.431582 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.431753 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-config-data\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.518097 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.535069 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-scripts\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.535129 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.535176 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-config-data\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.535285 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwjvp\" (UniqueName: \"kubernetes.io/projected/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-kube-api-access-rwjvp\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.538609 4695 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.539193 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.542594 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-scripts\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.543731 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-config-data\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.563028 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.563174 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwjvp\" (UniqueName: \"kubernetes.io/projected/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-kube-api-access-rwjvp\") pod \"nova-cell1-cell-mapping-kw4xf\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.565006 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.568621 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.568828 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.569151 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.580008 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.638501 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-logs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.638704 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.639136 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.639174 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-config-data\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.639213 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-public-tls-certs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.639390 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx7fl\" (UniqueName: \"kubernetes.io/projected/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-kube-api-access-mx7fl\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.677906 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.734560 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.734866 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.741333 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx7fl\" (UniqueName: \"kubernetes.io/projected/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-kube-api-access-mx7fl\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.741434 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-logs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.741519 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.741721 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.741786 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-config-data\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.741820 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-public-tls-certs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.742300 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-logs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.746743 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-config-data\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.747247 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc 
kubenswrapper[4695]: I1126 13:47:46.747840 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-public-tls-certs\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.755932 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.757437 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx7fl\" (UniqueName: \"kubernetes.io/projected/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-kube-api-access-mx7fl\") pod \"nova-api-0\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:46.946483 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:47.216458 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04061804-12eb-4a78-b468-3008a2054b50" path="/var/lib/kubelet/pods/04061804-12eb-4a78-b468-3008a2054b50/volumes" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:47.222465 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a752ce-d496-4008-992f-083103493b3c" path="/var/lib/kubelet/pods/f4a752ce-d496-4008-992f-083103493b3c/volumes" Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:47.264056 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b694fa3-bda0-4522-bc11-61c47db527af","Type":"ContainerStarted","Data":"42875bbf434e8bca4b0e5cae0beb6bc535fb4f0967da643a74237ba9d3c429c2"} Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:47.522452 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw4xf"] Nov 26 13:47:47 crc kubenswrapper[4695]: W1126 13:47:47.524201 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e51ee79_d2dc_4d2a_b41e_e5d8b45874f4.slice/crio-58e9c7e90a71a877fb2d46aa383f5ccce8b44bdc97ebb95f73997fad4665af06 WatchSource:0}: Error finding container 58e9c7e90a71a877fb2d46aa383f5ccce8b44bdc97ebb95f73997fad4665af06: Status 404 returned error can't find the container with id 58e9c7e90a71a877fb2d46aa383f5ccce8b44bdc97ebb95f73997fad4665af06 Nov 26 13:47:47 crc kubenswrapper[4695]: I1126 13:47:47.531228 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:47 crc kubenswrapper[4695]: W1126 13:47:47.539887 4695 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8916bc6c_1aae_4ee1_bc89_9afb8d98f34d.slice/crio-5a1b97cf98e9a587ad228a4bbab2fb9c2d084e28e48d09b51fd81139b3ad289e WatchSource:0}: Error finding container 5a1b97cf98e9a587ad228a4bbab2fb9c2d084e28e48d09b51fd81139b3ad289e: Status 404 returned error can't find the container with id 5a1b97cf98e9a587ad228a4bbab2fb9c2d084e28e48d09b51fd81139b3ad289e Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 13:47:48.279420 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b694fa3-bda0-4522-bc11-61c47db527af","Type":"ContainerStarted","Data":"6ad350394f0074bfb10237b569b814fcba2a5f9efb73627b2da117376b5cd38c"} Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 13:47:48.280957 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw4xf" event={"ID":"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d","Type":"ContainerStarted","Data":"b3d4c2aa6ff6b4a554235f85b028ada38dc383de9a97066154c4046607ac0034"} Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 13:47:48.281004 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw4xf" event={"ID":"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d","Type":"ContainerStarted","Data":"5a1b97cf98e9a587ad228a4bbab2fb9c2d084e28e48d09b51fd81139b3ad289e"} Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 13:47:48.284517 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4","Type":"ContainerStarted","Data":"2d93eee3ec29195ae9262eba7019441dcfdb29e66931d29b5567181830c3ea90"} Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 13:47:48.284570 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4","Type":"ContainerStarted","Data":"2e6c3ad28ffd82ab0b0e5582a3373c089c4595022c50e440b1a633bc1bf2695c"} Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 
13:47:48.284585 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4","Type":"ContainerStarted","Data":"58e9c7e90a71a877fb2d46aa383f5ccce8b44bdc97ebb95f73997fad4665af06"} Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 13:47:48.328403 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.328385394 podStartE2EDuration="2.328385394s" podCreationTimestamp="2025-11-26 13:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:48.324587924 +0000 UTC m=+1451.960413006" watchObservedRunningTime="2025-11-26 13:47:48.328385394 +0000 UTC m=+1451.964210476" Nov 26 13:47:48 crc kubenswrapper[4695]: I1126 13:47:48.332436 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-kw4xf" podStartSLOduration=2.332426083 podStartE2EDuration="2.332426083s" podCreationTimestamp="2025-11-26 13:47:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:48.303934077 +0000 UTC m=+1451.939759159" watchObservedRunningTime="2025-11-26 13:47:48.332426083 +0000 UTC m=+1451.968251165" Nov 26 13:47:49 crc kubenswrapper[4695]: I1126 13:47:49.295300 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b694fa3-bda0-4522-bc11-61c47db527af","Type":"ContainerStarted","Data":"4dab67294a5f04c1bad80200c68ee39a0c82d8e37a284a9a21750dc39cfb33b3"} Nov 26 13:47:49 crc kubenswrapper[4695]: I1126 13:47:49.914546 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" Nov 26 13:47:49 crc kubenswrapper[4695]: I1126 13:47:49.981752 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-865f5d856f-m9rhq"] Nov 26 13:47:49 crc kubenswrapper[4695]: I1126 13:47:49.982028 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerName="dnsmasq-dns" containerID="cri-o://61ebddc8f32b92169a7656f5f728a9cb45a1aa4bb8c8a3fd96d67b76368938cc" gracePeriod=10 Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.138539 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.191:5353: connect: connection refused" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.327260 4695 generic.go:334] "Generic (PLEG): container finished" podID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerID="61ebddc8f32b92169a7656f5f728a9cb45a1aa4bb8c8a3fd96d67b76368938cc" exitCode=0 Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.327505 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" event={"ID":"3a64e394-eef4-4ade-bb3a-d41d2326b554","Type":"ContainerDied","Data":"61ebddc8f32b92169a7656f5f728a9cb45a1aa4bb8c8a3fd96d67b76368938cc"} Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.500881 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.534919 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-nb\") pod \"3a64e394-eef4-4ade-bb3a-d41d2326b554\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.535055 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-config\") pod \"3a64e394-eef4-4ade-bb3a-d41d2326b554\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.535106 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-swift-storage-0\") pod \"3a64e394-eef4-4ade-bb3a-d41d2326b554\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.535186 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s28ns\" (UniqueName: \"kubernetes.io/projected/3a64e394-eef4-4ade-bb3a-d41d2326b554-kube-api-access-s28ns\") pod \"3a64e394-eef4-4ade-bb3a-d41d2326b554\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.535225 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-sb\") pod \"3a64e394-eef4-4ade-bb3a-d41d2326b554\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.535265 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-svc\") pod \"3a64e394-eef4-4ade-bb3a-d41d2326b554\" (UID: \"3a64e394-eef4-4ade-bb3a-d41d2326b554\") " Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.540893 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a64e394-eef4-4ade-bb3a-d41d2326b554-kube-api-access-s28ns" (OuterVolumeSpecName: "kube-api-access-s28ns") pod "3a64e394-eef4-4ade-bb3a-d41d2326b554" (UID: "3a64e394-eef4-4ade-bb3a-d41d2326b554"). InnerVolumeSpecName "kube-api-access-s28ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.589226 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3a64e394-eef4-4ade-bb3a-d41d2326b554" (UID: "3a64e394-eef4-4ade-bb3a-d41d2326b554"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.596785 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3a64e394-eef4-4ade-bb3a-d41d2326b554" (UID: "3a64e394-eef4-4ade-bb3a-d41d2326b554"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.610599 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3a64e394-eef4-4ade-bb3a-d41d2326b554" (UID: "3a64e394-eef4-4ade-bb3a-d41d2326b554"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.628995 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-config" (OuterVolumeSpecName: "config") pod "3a64e394-eef4-4ade-bb3a-d41d2326b554" (UID: "3a64e394-eef4-4ade-bb3a-d41d2326b554"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.637920 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.637966 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.637980 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.637993 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s28ns\" (UniqueName: \"kubernetes.io/projected/3a64e394-eef4-4ade-bb3a-d41d2326b554-kube-api-access-s28ns\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.638005 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.639410 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3a64e394-eef4-4ade-bb3a-d41d2326b554" (UID: "3a64e394-eef4-4ade-bb3a-d41d2326b554"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:47:50 crc kubenswrapper[4695]: I1126 13:47:50.739433 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3a64e394-eef4-4ade-bb3a-d41d2326b554-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.337593 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" event={"ID":"3a64e394-eef4-4ade-bb3a-d41d2326b554","Type":"ContainerDied","Data":"7ed4ed2675db07044f962e68e7676681c7675880712793ff8ea9e37ba1cdef1e"} Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.337982 4695 scope.go:117] "RemoveContainer" containerID="61ebddc8f32b92169a7656f5f728a9cb45a1aa4bb8c8a3fd96d67b76368938cc" Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.337618 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-m9rhq" Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.345782 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b694fa3-bda0-4522-bc11-61c47db527af","Type":"ContainerStarted","Data":"9f856f5373382a92606ab613c5bc9a49d7a0a17424a39ca27f33cfa052debc78"} Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.347027 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.362407 4695 scope.go:117] "RemoveContainer" containerID="96197d190e2df690bcd2ad76d5fec7b0a40ee61b74ba272e400d5d80dc19e4c0" Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.396146 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.30590144 podStartE2EDuration="6.396127316s" podCreationTimestamp="2025-11-26 13:47:45 +0000 UTC" firstStartedPulling="2025-11-26 13:47:46.082251139 +0000 UTC m=+1449.718076221" lastFinishedPulling="2025-11-26 13:47:50.172477015 +0000 UTC m=+1453.808302097" observedRunningTime="2025-11-26 13:47:51.385976464 +0000 UTC m=+1455.021801556" watchObservedRunningTime="2025-11-26 13:47:51.396127316 +0000 UTC m=+1455.031952398" Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.413489 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-m9rhq"] Nov 26 13:47:51 crc kubenswrapper[4695]: I1126 13:47:51.420620 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-m9rhq"] Nov 26 13:47:53 crc kubenswrapper[4695]: I1126 13:47:53.177658 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" path="/var/lib/kubelet/pods/3a64e394-eef4-4ade-bb3a-d41d2326b554/volumes" Nov 26 13:47:53 crc kubenswrapper[4695]: I1126 13:47:53.372059 4695 generic.go:334] "Generic (PLEG): 
container finished" podID="8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" containerID="b3d4c2aa6ff6b4a554235f85b028ada38dc383de9a97066154c4046607ac0034" exitCode=0 Nov 26 13:47:53 crc kubenswrapper[4695]: I1126 13:47:53.372734 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw4xf" event={"ID":"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d","Type":"ContainerDied","Data":"b3d4c2aa6ff6b4a554235f85b028ada38dc383de9a97066154c4046607ac0034"} Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.853227 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.942105 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-scripts\") pod \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.942178 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-combined-ca-bundle\") pod \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.942411 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwjvp\" (UniqueName: \"kubernetes.io/projected/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-kube-api-access-rwjvp\") pod \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.942458 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-config-data\") pod 
\"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\" (UID: \"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d\") " Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.948065 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-scripts" (OuterVolumeSpecName: "scripts") pod "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" (UID: "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.959821 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-kube-api-access-rwjvp" (OuterVolumeSpecName: "kube-api-access-rwjvp") pod "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" (UID: "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d"). InnerVolumeSpecName "kube-api-access-rwjvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.978437 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" (UID: "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:54 crc kubenswrapper[4695]: I1126 13:47:54.981016 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-config-data" (OuterVolumeSpecName: "config-data") pod "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" (UID: "8916bc6c-1aae-4ee1-bc89-9afb8d98f34d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.044629 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwjvp\" (UniqueName: \"kubernetes.io/projected/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-kube-api-access-rwjvp\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.044657 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.044666 4695 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-scripts\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.044675 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.391112 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kw4xf" event={"ID":"8916bc6c-1aae-4ee1-bc89-9afb8d98f34d","Type":"ContainerDied","Data":"5a1b97cf98e9a587ad228a4bbab2fb9c2d084e28e48d09b51fd81139b3ad289e"} Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.391176 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a1b97cf98e9a587ad228a4bbab2fb9c2d084e28e48d09b51fd81139b3ad289e" Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.391406 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kw4xf" Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.563113 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.563350 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-log" containerID="cri-o://2e6c3ad28ffd82ab0b0e5582a3373c089c4595022c50e440b1a633bc1bf2695c" gracePeriod=30 Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.563530 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-api" containerID="cri-o://2d93eee3ec29195ae9262eba7019441dcfdb29e66931d29b5567181830c3ea90" gracePeriod=30 Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.575339 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.582633 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f89f6929-5688-4695-a589-2103e1d61cac" containerName="nova-scheduler-scheduler" containerID="cri-o://ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" gracePeriod=30 Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.598574 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.599134 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-metadata" containerID="cri-o://296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226" gracePeriod=30 Nov 26 13:47:55 crc kubenswrapper[4695]: I1126 13:47:55.599070 4695 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-log" containerID="cri-o://ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c" gracePeriod=30 Nov 26 13:47:56 crc kubenswrapper[4695]: E1126 13:47:56.216309 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e51ee79_d2dc_4d2a_b41e_e5d8b45874f4.slice/crio-2d93eee3ec29195ae9262eba7019441dcfdb29e66931d29b5567181830c3ea90.scope\": RecentStats: unable to find data in memory cache]" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.406989 4695 generic.go:334] "Generic (PLEG): container finished" podID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerID="ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c" exitCode=143 Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.407096 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7e4f67c-33d9-4198-9941-5bca025cc766","Type":"ContainerDied","Data":"ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c"} Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.415326 4695 generic.go:334] "Generic (PLEG): container finished" podID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerID="2d93eee3ec29195ae9262eba7019441dcfdb29e66931d29b5567181830c3ea90" exitCode=0 Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.415472 4695 generic.go:334] "Generic (PLEG): container finished" podID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerID="2e6c3ad28ffd82ab0b0e5582a3373c089c4595022c50e440b1a633bc1bf2695c" exitCode=143 Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.415506 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4","Type":"ContainerDied","Data":"2d93eee3ec29195ae9262eba7019441dcfdb29e66931d29b5567181830c3ea90"} Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.415535 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4","Type":"ContainerDied","Data":"2e6c3ad28ffd82ab0b0e5582a3373c089c4595022c50e440b1a633bc1bf2695c"} Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.541748 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.574500 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx7fl\" (UniqueName: \"kubernetes.io/projected/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-kube-api-access-mx7fl\") pod \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.574654 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-logs\") pod \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.574905 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-logs" (OuterVolumeSpecName: "logs") pod "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" (UID: "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.574980 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-internal-tls-certs\") pod \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.575010 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-combined-ca-bundle\") pod \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.575368 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-public-tls-certs\") pod \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.575439 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-config-data\") pod \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\" (UID: \"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4\") " Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.576452 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.583546 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-kube-api-access-mx7fl" (OuterVolumeSpecName: "kube-api-access-mx7fl") pod 
"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" (UID: "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4"). InnerVolumeSpecName "kube-api-access-mx7fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.618053 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-config-data" (OuterVolumeSpecName: "config-data") pod "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" (UID: "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.620625 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" (UID: "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.637772 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" (UID: "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.656313 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" (UID: "1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.678586 4695 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.678627 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.678640 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx7fl\" (UniqueName: \"kubernetes.io/projected/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-kube-api-access-mx7fl\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.678653 4695 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:56 crc kubenswrapper[4695]: I1126 13:47:56.678666 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.164610 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.166248 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec 
PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.167974 4695 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.168046 4695 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f89f6929-5688-4695-a589-2103e1d61cac" containerName="nova-scheduler-scheduler" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.425747 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4","Type":"ContainerDied","Data":"58e9c7e90a71a877fb2d46aa383f5ccce8b44bdc97ebb95f73997fad4665af06"} Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.425804 4695 scope.go:117] "RemoveContainer" containerID="2d93eee3ec29195ae9262eba7019441dcfdb29e66931d29b5567181830c3ea90" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.426552 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.455216 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.463949 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473035 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.473497 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-log" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473518 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-log" Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.473536 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerName="dnsmasq-dns" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473544 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerName="dnsmasq-dns" Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.473568 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-api" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473576 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-api" Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.473595 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" containerName="nova-manage" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473602 4695 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" containerName="nova-manage" Nov 26 13:47:57 crc kubenswrapper[4695]: E1126 13:47:57.473626 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerName="init" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473633 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerName="init" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473888 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-api" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473916 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" containerName="nova-api-log" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473928 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" containerName="nova-manage" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.473944 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a64e394-eef4-4ade-bb3a-d41d2326b554" containerName="dnsmasq-dns" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.475303 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.479510 4695 scope.go:117] "RemoveContainer" containerID="2e6c3ad28ffd82ab0b0e5582a3373c089c4595022c50e440b1a633bc1bf2695c" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.483372 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.483507 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.484141 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.488008 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.605388 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.605483 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-config-data\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.605532 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-logs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc 
kubenswrapper[4695]: I1126 13:47:57.605581 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.605751 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.605847 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzfbl\" (UniqueName: \"kubernetes.io/projected/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-kube-api-access-fzfbl\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.707644 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzfbl\" (UniqueName: \"kubernetes.io/projected/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-kube-api-access-fzfbl\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.707721 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.707788 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-config-data\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.707827 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-logs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.707869 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.707894 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.708393 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-logs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.713434 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.713558 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.714302 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.721565 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-config-data\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.724197 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzfbl\" (UniqueName: \"kubernetes.io/projected/6c2fff62-d355-4448-a2f8-a2d9f5c13e9a-kube-api-access-fzfbl\") pod \"nova-api-0\" (UID: \"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a\") " pod="openstack/nova-api-0" Nov 26 13:47:57 crc kubenswrapper[4695]: I1126 13:47:57.800117 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 26 13:47:58 crc kubenswrapper[4695]: I1126 13:47:58.292788 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 26 13:47:58 crc kubenswrapper[4695]: I1126 13:47:58.435986 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a","Type":"ContainerStarted","Data":"cc3382c2e31751f6cc35fb5ab20ca43cdca83711c2cf989968d3d335e9a7bfb4"} Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.176818 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4" path="/var/lib/kubelet/pods/1e51ee79-d2dc-4d2a-b41e-e5d8b45874f4/volumes" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.186690 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.232893 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-nova-metadata-tls-certs\") pod \"e7e4f67c-33d9-4198-9941-5bca025cc766\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.232956 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-config-data\") pod \"e7e4f67c-33d9-4198-9941-5bca025cc766\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.233090 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ss6l\" (UniqueName: \"kubernetes.io/projected/e7e4f67c-33d9-4198-9941-5bca025cc766-kube-api-access-8ss6l\") pod \"e7e4f67c-33d9-4198-9941-5bca025cc766\" (UID: 
\"e7e4f67c-33d9-4198-9941-5bca025cc766\") " Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.233151 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-combined-ca-bundle\") pod \"e7e4f67c-33d9-4198-9941-5bca025cc766\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.233238 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e4f67c-33d9-4198-9941-5bca025cc766-logs\") pod \"e7e4f67c-33d9-4198-9941-5bca025cc766\" (UID: \"e7e4f67c-33d9-4198-9941-5bca025cc766\") " Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.234088 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7e4f67c-33d9-4198-9941-5bca025cc766-logs" (OuterVolumeSpecName: "logs") pod "e7e4f67c-33d9-4198-9941-5bca025cc766" (UID: "e7e4f67c-33d9-4198-9941-5bca025cc766"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.237560 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e4f67c-33d9-4198-9941-5bca025cc766-kube-api-access-8ss6l" (OuterVolumeSpecName: "kube-api-access-8ss6l") pod "e7e4f67c-33d9-4198-9941-5bca025cc766" (UID: "e7e4f67c-33d9-4198-9941-5bca025cc766"). InnerVolumeSpecName "kube-api-access-8ss6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.259853 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7e4f67c-33d9-4198-9941-5bca025cc766" (UID: "e7e4f67c-33d9-4198-9941-5bca025cc766"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.280538 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-config-data" (OuterVolumeSpecName: "config-data") pod "e7e4f67c-33d9-4198-9941-5bca025cc766" (UID: "e7e4f67c-33d9-4198-9941-5bca025cc766"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.282735 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "e7e4f67c-33d9-4198-9941-5bca025cc766" (UID: "e7e4f67c-33d9-4198-9941-5bca025cc766"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.335590 4695 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7e4f67c-33d9-4198-9941-5bca025cc766-logs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.335621 4695 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.335632 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.335641 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ss6l\" (UniqueName: 
\"kubernetes.io/projected/e7e4f67c-33d9-4198-9941-5bca025cc766-kube-api-access-8ss6l\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.335650 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7e4f67c-33d9-4198-9941-5bca025cc766-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.456783 4695 generic.go:334] "Generic (PLEG): container finished" podID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerID="296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226" exitCode=0 Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.456826 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.456854 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7e4f67c-33d9-4198-9941-5bca025cc766","Type":"ContainerDied","Data":"296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226"} Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.457296 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7e4f67c-33d9-4198-9941-5bca025cc766","Type":"ContainerDied","Data":"eb2bf12cd897add7e33140f7b45aaa88d26121b897dab7b256029b712292d9c5"} Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.457319 4695 scope.go:117] "RemoveContainer" containerID="296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.460588 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a","Type":"ContainerStarted","Data":"05ba3ae1dffaff59c7fad4d6b7b95e11b5cd70d7367eb73dc38efce78abcb4b4"} Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.460647 4695 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c2fff62-d355-4448-a2f8-a2d9f5c13e9a","Type":"ContainerStarted","Data":"e45975db2fdbfea28ce85f097ee6ac33dc7fc9bfc96c0f095b54e1820febfe40"} Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.486746 4695 scope.go:117] "RemoveContainer" containerID="ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.492886 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.492860239 podStartE2EDuration="2.492860239s" podCreationTimestamp="2025-11-26 13:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:47:59.476954464 +0000 UTC m=+1463.112779566" watchObservedRunningTime="2025-11-26 13:47:59.492860239 +0000 UTC m=+1463.128685321" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.507636 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.511968 4695 scope.go:117] "RemoveContainer" containerID="296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226" Nov 26 13:47:59 crc kubenswrapper[4695]: E1126 13:47:59.515786 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226\": container with ID starting with 296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226 not found: ID does not exist" containerID="296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.515833 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226"} err="failed to get container status 
\"296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226\": rpc error: code = NotFound desc = could not find container \"296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226\": container with ID starting with 296b34f61333aeb55e9907e7b74f3e97c9deeaadd1983e991c686a178f452226 not found: ID does not exist" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.515861 4695 scope.go:117] "RemoveContainer" containerID="ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c" Nov 26 13:47:59 crc kubenswrapper[4695]: E1126 13:47:59.516239 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c\": container with ID starting with ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c not found: ID does not exist" containerID="ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.516284 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c"} err="failed to get container status \"ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c\": rpc error: code = NotFound desc = could not find container \"ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c\": container with ID starting with ca71d0ce4f1b7557813e8a314b6d6d76906f21419b0a906d19a9bf889cce157c not found: ID does not exist" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.517466 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.528104 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:59 crc kubenswrapper[4695]: E1126 13:47:59.528672 4695 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-log" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.528687 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-log" Nov 26 13:47:59 crc kubenswrapper[4695]: E1126 13:47:59.528704 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-metadata" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.528711 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-metadata" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.528874 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-log" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.528891 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" containerName="nova-metadata-metadata" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.529965 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.532584 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.535142 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.537380 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.640024 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.640301 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-config-data\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.640427 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.640625 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbwrh\" (UniqueName: 
\"kubernetes.io/projected/ec2ef622-e87b-4dde-a1ca-81496cfd3562-kube-api-access-lbwrh\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.640705 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec2ef622-e87b-4dde-a1ca-81496cfd3562-logs\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.742787 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbwrh\" (UniqueName: \"kubernetes.io/projected/ec2ef622-e87b-4dde-a1ca-81496cfd3562-kube-api-access-lbwrh\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.742868 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec2ef622-e87b-4dde-a1ca-81496cfd3562-logs\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.742927 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.743000 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-config-data\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" 
Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.743045 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.752896 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec2ef622-e87b-4dde-a1ca-81496cfd3562-logs\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.757519 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.759616 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-config-data\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.760834 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec2ef622-e87b-4dde-a1ca-81496cfd3562-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.780603 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbwrh\" (UniqueName: 
\"kubernetes.io/projected/ec2ef622-e87b-4dde-a1ca-81496cfd3562-kube-api-access-lbwrh\") pod \"nova-metadata-0\" (UID: \"ec2ef622-e87b-4dde-a1ca-81496cfd3562\") " pod="openstack/nova-metadata-0" Nov 26 13:47:59 crc kubenswrapper[4695]: I1126 13:47:59.858749 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 26 13:48:00 crc kubenswrapper[4695]: I1126 13:48:00.330295 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 26 13:48:00 crc kubenswrapper[4695]: I1126 13:48:00.474471 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec2ef622-e87b-4dde-a1ca-81496cfd3562","Type":"ContainerStarted","Data":"1c0bf3081323ca307272b7c4e7d3508170509046856f0a6478dd06eca9a160a6"} Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.176047 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e4f67c-33d9-4198-9941-5bca025cc766" path="/var/lib/kubelet/pods/e7e4f67c-33d9-4198-9941-5bca025cc766/volumes" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.338019 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.409657 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-combined-ca-bundle\") pod \"f89f6929-5688-4695-a589-2103e1d61cac\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.409707 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7wp4\" (UniqueName: \"kubernetes.io/projected/f89f6929-5688-4695-a589-2103e1d61cac-kube-api-access-d7wp4\") pod \"f89f6929-5688-4695-a589-2103e1d61cac\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.409811 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-config-data\") pod \"f89f6929-5688-4695-a589-2103e1d61cac\" (UID: \"f89f6929-5688-4695-a589-2103e1d61cac\") " Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.424531 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f89f6929-5688-4695-a589-2103e1d61cac-kube-api-access-d7wp4" (OuterVolumeSpecName: "kube-api-access-d7wp4") pod "f89f6929-5688-4695-a589-2103e1d61cac" (UID: "f89f6929-5688-4695-a589-2103e1d61cac"). InnerVolumeSpecName "kube-api-access-d7wp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.453490 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f89f6929-5688-4695-a589-2103e1d61cac" (UID: "f89f6929-5688-4695-a589-2103e1d61cac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.456741 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-config-data" (OuterVolumeSpecName: "config-data") pod "f89f6929-5688-4695-a589-2103e1d61cac" (UID: "f89f6929-5688-4695-a589-2103e1d61cac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.490439 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec2ef622-e87b-4dde-a1ca-81496cfd3562","Type":"ContainerStarted","Data":"c21a584b0312638f82535359ba4d364d6035c3dd4feceb98d4cd129180124b1c"} Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.490506 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"ec2ef622-e87b-4dde-a1ca-81496cfd3562","Type":"ContainerStarted","Data":"dd331e452a5e425361dc392c4909621522c99d848ec35133e05c0f61466dacd3"} Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.494651 4695 generic.go:334] "Generic (PLEG): container finished" podID="f89f6929-5688-4695-a589-2103e1d61cac" containerID="ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" exitCode=0 Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.494696 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.494710 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89f6929-5688-4695-a589-2103e1d61cac","Type":"ContainerDied","Data":"ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23"} Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.495150 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f89f6929-5688-4695-a589-2103e1d61cac","Type":"ContainerDied","Data":"95ffa48e09d1cb20fd9f6e505b56eb7b6c94a43084ed2c7168d84bfecda4d8c3"} Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.495190 4695 scope.go:117] "RemoveContainer" containerID="ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.513337 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.513383 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f89f6929-5688-4695-a589-2103e1d61cac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.513396 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7wp4\" (UniqueName: \"kubernetes.io/projected/f89f6929-5688-4695-a589-2103e1d61cac-kube-api-access-d7wp4\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.523990 4695 scope.go:117] "RemoveContainer" containerID="ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.524381 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=2.524369181 podStartE2EDuration="2.524369181s" podCreationTimestamp="2025-11-26 13:47:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:48:01.51238888 +0000 UTC m=+1465.148213962" watchObservedRunningTime="2025-11-26 13:48:01.524369181 +0000 UTC m=+1465.160194273" Nov 26 13:48:01 crc kubenswrapper[4695]: E1126 13:48:01.525894 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23\": container with ID starting with ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23 not found: ID does not exist" containerID="ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.525962 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23"} err="failed to get container status \"ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23\": rpc error: code = NotFound desc = could not find container \"ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23\": container with ID starting with ebe0508329bbeaaade1ad6fab07c5e36442477dbe4b75211e75b349053745d23 not found: ID does not exist" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.550222 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.561610 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.569329 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:48:01 crc kubenswrapper[4695]: E1126 13:48:01.569868 4695 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f89f6929-5688-4695-a589-2103e1d61cac" containerName="nova-scheduler-scheduler" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.569889 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f89f6929-5688-4695-a589-2103e1d61cac" containerName="nova-scheduler-scheduler" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.570078 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f89f6929-5688-4695-a589-2103e1d61cac" containerName="nova-scheduler-scheduler" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.570884 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.573341 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.578155 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.716427 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-config-data\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.716473 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9z6\" (UniqueName: \"kubernetes.io/projected/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-kube-api-access-5q9z6\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.716570 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.818639 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.818758 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-config-data\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.818783 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9z6\" (UniqueName: \"kubernetes.io/projected/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-kube-api-access-5q9z6\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.823855 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.825848 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-config-data\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " 
pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.839556 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9z6\" (UniqueName: \"kubernetes.io/projected/4d67e67d-ff4c-46e4-b0af-9eb1c017bf46-kube-api-access-5q9z6\") pod \"nova-scheduler-0\" (UID: \"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46\") " pod="openstack/nova-scheduler-0" Nov 26 13:48:01 crc kubenswrapper[4695]: I1126 13:48:01.884938 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 26 13:48:02 crc kubenswrapper[4695]: I1126 13:48:02.319200 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 26 13:48:02 crc kubenswrapper[4695]: W1126 13:48:02.325158 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d67e67d_ff4c_46e4_b0af_9eb1c017bf46.slice/crio-875300b533b91e653ea5589728b2739e284bd7db646e0f5bb87c80fcfd7bbbec WatchSource:0}: Error finding container 875300b533b91e653ea5589728b2739e284bd7db646e0f5bb87c80fcfd7bbbec: Status 404 returned error can't find the container with id 875300b533b91e653ea5589728b2739e284bd7db646e0f5bb87c80fcfd7bbbec Nov 26 13:48:02 crc kubenswrapper[4695]: I1126 13:48:02.520638 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46","Type":"ContainerStarted","Data":"875300b533b91e653ea5589728b2739e284bd7db646e0f5bb87c80fcfd7bbbec"} Nov 26 13:48:03 crc kubenswrapper[4695]: I1126 13:48:03.180218 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f89f6929-5688-4695-a589-2103e1d61cac" path="/var/lib/kubelet/pods/f89f6929-5688-4695-a589-2103e1d61cac/volumes" Nov 26 13:48:03 crc kubenswrapper[4695]: I1126 13:48:03.533105 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"4d67e67d-ff4c-46e4-b0af-9eb1c017bf46","Type":"ContainerStarted","Data":"c63224763a888188da636dff8760e83d525e472d597a6c708fffc50c84e691a7"} Nov 26 13:48:04 crc kubenswrapper[4695]: I1126 13:48:04.859122 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 13:48:04 crc kubenswrapper[4695]: I1126 13:48:04.859172 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 26 13:48:06 crc kubenswrapper[4695]: I1126 13:48:06.885906 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 26 13:48:07 crc kubenswrapper[4695]: I1126 13:48:07.801361 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 13:48:07 crc kubenswrapper[4695]: I1126 13:48:07.801413 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 26 13:48:08 crc kubenswrapper[4695]: I1126 13:48:08.817564 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c2fff62-d355-4448-a2f8-a2d9f5c13e9a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 13:48:08 crc kubenswrapper[4695]: I1126 13:48:08.817565 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c2fff62-d355-4448-a2f8-a2d9f5c13e9a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 13:48:09 crc kubenswrapper[4695]: I1126 13:48:09.859281 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 13:48:09 crc kubenswrapper[4695]: I1126 13:48:09.859737 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/nova-metadata-0" Nov 26 13:48:10 crc kubenswrapper[4695]: I1126 13:48:10.871492 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec2ef622-e87b-4dde-a1ca-81496cfd3562" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 13:48:10 crc kubenswrapper[4695]: I1126 13:48:10.871500 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="ec2ef622-e87b-4dde-a1ca-81496cfd3562" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.206:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 26 13:48:11 crc kubenswrapper[4695]: I1126 13:48:11.886278 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 26 13:48:11 crc kubenswrapper[4695]: I1126 13:48:11.942894 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 26 13:48:11 crc kubenswrapper[4695]: I1126 13:48:11.957983 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=10.957963945 podStartE2EDuration="10.957963945s" podCreationTimestamp="2025-11-26 13:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:48:03.55895829 +0000 UTC m=+1467.194783372" watchObservedRunningTime="2025-11-26 13:48:11.957963945 +0000 UTC m=+1475.593789037" Nov 26 13:48:12 crc kubenswrapper[4695]: I1126 13:48:12.641219 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 26 13:48:15 crc kubenswrapper[4695]: I1126 13:48:15.600120 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/ceilometer-0" Nov 26 13:48:17 crc kubenswrapper[4695]: I1126 13:48:17.810650 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 13:48:17 crc kubenswrapper[4695]: I1126 13:48:17.811271 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 13:48:17 crc kubenswrapper[4695]: I1126 13:48:17.811445 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 26 13:48:17 crc kubenswrapper[4695]: I1126 13:48:17.819987 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 13:48:18 crc kubenswrapper[4695]: I1126 13:48:18.703002 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 26 13:48:18 crc kubenswrapper[4695]: I1126 13:48:18.710471 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 26 13:48:19 crc kubenswrapper[4695]: I1126 13:48:19.866872 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 13:48:19 crc kubenswrapper[4695]: I1126 13:48:19.867854 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 26 13:48:19 crc kubenswrapper[4695]: I1126 13:48:19.872680 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 13:48:20 crc kubenswrapper[4695]: I1126 13:48:20.729477 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 26 13:48:29 crc kubenswrapper[4695]: I1126 13:48:29.262729 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 13:48:30 crc kubenswrapper[4695]: I1126 13:48:30.103848 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 13:48:33 crc kubenswrapper[4695]: I1126 13:48:33.187062 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerName="rabbitmq" containerID="cri-o://7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3" gracePeriod=604797 Nov 26 13:48:33 crc kubenswrapper[4695]: I1126 13:48:33.578573 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 26 13:48:33 crc kubenswrapper[4695]: I1126 13:48:33.925461 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerName="rabbitmq" containerID="cri-o://bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c" gracePeriod=604797 Nov 26 13:48:33 crc kubenswrapper[4695]: I1126 13:48:33.928778 4695 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.753416 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829567 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-plugins-conf\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829731 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-plugins\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829763 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27495d77-50c6-4476-86c3-dafb0e5dbb97-erlang-cookie-secret\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829787 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-confd\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829855 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-config-data\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829889 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/27495d77-50c6-4476-86c3-dafb0e5dbb97-pod-info\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829910 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-tls\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829960 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.829998 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbvz4\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-kube-api-access-dbvz4\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.830042 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-erlang-cookie\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.830084 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-server-conf\") pod \"27495d77-50c6-4476-86c3-dafb0e5dbb97\" (UID: \"27495d77-50c6-4476-86c3-dafb0e5dbb97\") " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 
13:48:39.830119 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.830614 4695 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.834952 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.835385 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.839986 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.841599 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27495d77-50c6-4476-86c3-dafb0e5dbb97-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.846595 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/27495d77-50c6-4476-86c3-dafb0e5dbb97-pod-info" (OuterVolumeSpecName: "pod-info") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.850619 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-kube-api-access-dbvz4" (OuterVolumeSpecName: "kube-api-access-dbvz4") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "kube-api-access-dbvz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.890364 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-config-data" (OuterVolumeSpecName: "config-data") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.893606 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933016 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-server-conf" (OuterVolumeSpecName: "server-conf") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933808 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933837 4695 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/27495d77-50c6-4476-86c3-dafb0e5dbb97-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933847 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933855 4695 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/27495d77-50c6-4476-86c3-dafb0e5dbb97-pod-info\") on node \"crc\" DevicePath \"\"" Nov 
26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933863 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933884 4695 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933893 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbvz4\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-kube-api-access-dbvz4\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933903 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.933911 4695 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/27495d77-50c6-4476-86c3-dafb0e5dbb97-server-conf\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.946525 4695 generic.go:334] "Generic (PLEG): container finished" podID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerID="7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3" exitCode=0 Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.946578 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27495d77-50c6-4476-86c3-dafb0e5dbb97","Type":"ContainerDied","Data":"7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3"} Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.946603 4695 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"27495d77-50c6-4476-86c3-dafb0e5dbb97","Type":"ContainerDied","Data":"fe6b19585934a298bdfe08e7003ca109bf968f655514a892f2e422361bb901d5"} Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.946621 4695 scope.go:117] "RemoveContainer" containerID="7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.946759 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.963982 4695 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.985614 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "27495d77-50c6-4476-86c3-dafb0e5dbb97" (UID: "27495d77-50c6-4476-86c3-dafb0e5dbb97"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:48:39 crc kubenswrapper[4695]: I1126 13:48:39.997489 4695 scope.go:117] "RemoveContainer" containerID="cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854" Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.035734 4695 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.035765 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/27495d77-50c6-4476-86c3-dafb0e5dbb97-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.037311 4695 scope.go:117] "RemoveContainer" containerID="7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3" Nov 26 13:48:40 crc kubenswrapper[4695]: E1126 13:48:40.038516 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3\": container with ID starting with 7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3 not found: ID does not exist" containerID="7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3" Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.038554 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3"} err="failed to get container status \"7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3\": rpc error: code = NotFound desc = could not find container \"7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3\": container with ID starting with 7b3b51a7d8ad407d41aabbd840de5b22b7f13c9186d0342681f6905c67d93fe3 not found: ID does not exist" Nov 26 13:48:40 
crc kubenswrapper[4695]: I1126 13:48:40.038576 4695 scope.go:117] "RemoveContainer" containerID="cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854"
Nov 26 13:48:40 crc kubenswrapper[4695]: E1126 13:48:40.038888 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854\": container with ID starting with cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854 not found: ID does not exist" containerID="cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.038927 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854"} err="failed to get container status \"cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854\": rpc error: code = NotFound desc = could not find container \"cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854\": container with ID starting with cea47fbc0f5efa067bd4cedd103a38efdfd6346a93bd7851ef281ade37b78854 not found: ID does not exist"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.293635 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.307169 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.314941 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 26 13:48:40 crc kubenswrapper[4695]: E1126 13:48:40.315328 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerName="setup-container"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.315381 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerName="setup-container"
Nov 26 13:48:40 crc kubenswrapper[4695]: E1126 13:48:40.315426 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerName="rabbitmq"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.315435 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerName="rabbitmq"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.318465 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" containerName="rabbitmq"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.319507 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.325717 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.325780 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.325792 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.325849 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.325917 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-5rpqq"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.325803 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.326654 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.338407 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.445834 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.445887 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/51da5818-5d05-4f99-84a7-93eae660a8a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.445919 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446207 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/51da5818-5d05-4f99-84a7-93eae660a8a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446273 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng4td\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-kube-api-access-ng4td\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446329 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446530 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446592 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446623 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446704 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.446881 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.523995 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.549871 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.549933 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.549980 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/51da5818-5d05-4f99-84a7-93eae660a8a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.549995 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.550019 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.550098 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/51da5818-5d05-4f99-84a7-93eae660a8a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.550126 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng4td\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-kube-api-access-ng4td\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.550146 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.550162 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.550189 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.550217 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.551097 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.551175 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.551414 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.551994 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.552161 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.552204 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51da5818-5d05-4f99-84a7-93eae660a8a7-config-data\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.557288 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/51da5818-5d05-4f99-84a7-93eae660a8a7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.559054 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.559798 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/51da5818-5d05-4f99-84a7-93eae660a8a7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.571871 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng4td\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-kube-api-access-ng4td\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.573104 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/51da5818-5d05-4f99-84a7-93eae660a8a7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.588531 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"51da5818-5d05-4f99-84a7-93eae660a8a7\") " pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651331 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-config-data\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651406 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651471 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a771ad6-98a9-474e-83f0-e17fecdee9be-pod-info\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651595 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-plugins\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651648 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-server-conf\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651667 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a771ad6-98a9-474e-83f0-e17fecdee9be-erlang-cookie-secret\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651706 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgk6r\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-kube-api-access-zgk6r\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651752 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-plugins-conf\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651788 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-erlang-cookie\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651825 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-tls\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.651889 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-confd\") pod \"6a771ad6-98a9-474e-83f0-e17fecdee9be\" (UID: \"6a771ad6-98a9-474e-83f0-e17fecdee9be\") "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.653993 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.654084 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.655132 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.657797 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.658827 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.659415 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6a771ad6-98a9-474e-83f0-e17fecdee9be-pod-info" (OuterVolumeSpecName: "pod-info") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.659449 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a771ad6-98a9-474e-83f0-e17fecdee9be-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.661070 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-kube-api-access-zgk6r" (OuterVolumeSpecName: "kube-api-access-zgk6r") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "kube-api-access-zgk6r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.677797 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.682816 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-config-data" (OuterVolumeSpecName: "config-data") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.705370 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-server-conf" (OuterVolumeSpecName: "server-conf") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753601 4695 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6a771ad6-98a9-474e-83f0-e17fecdee9be-pod-info\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753848 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753857 4695 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-server-conf\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753865 4695 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6a771ad6-98a9-474e-83f0-e17fecdee9be-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753874 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgk6r\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-kube-api-access-zgk6r\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753882 4695 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-plugins-conf\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753890 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753900 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753910 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a771ad6-98a9-474e-83f0-e17fecdee9be-config-data\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.753934 4695 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.771450 4695 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.775533 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6a771ad6-98a9-474e-83f0-e17fecdee9be" (UID: "6a771ad6-98a9-474e-83f0-e17fecdee9be"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.861280 4695 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6a771ad6-98a9-474e-83f0-e17fecdee9be-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.861325 4695 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.961480 4695 generic.go:334] "Generic (PLEG): container finished" podID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerID="bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c" exitCode=0
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.961527 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a771ad6-98a9-474e-83f0-e17fecdee9be","Type":"ContainerDied","Data":"bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c"}
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.961553 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"6a771ad6-98a9-474e-83f0-e17fecdee9be","Type":"ContainerDied","Data":"9815c367d19f282af6d911c6d924611abbb3c3900864989705eaeb4fd861b3af"}
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.961569 4695 scope.go:117] "RemoveContainer" containerID="bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.961673 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.992235 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 26 13:48:40 crc kubenswrapper[4695]: I1126 13:48:40.999749 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.003596 4695 scope.go:117] "RemoveContainer" containerID="67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.016235 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 26 13:48:41 crc kubenswrapper[4695]: E1126 13:48:41.016705 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerName="rabbitmq"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.016723 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerName="rabbitmq"
Nov 26 13:48:41 crc kubenswrapper[4695]: E1126 13:48:41.016733 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerName="setup-container"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.016740 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerName="setup-container"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.016939 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" containerName="rabbitmq"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.017842 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.019704 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-ws9b2"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.023666 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.024717 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.025537 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.025891 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.026018 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.030188 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.038519 4695 scope.go:117] "RemoveContainer" containerID="bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.040740 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 26 13:48:41 crc kubenswrapper[4695]: E1126 13:48:41.041612 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c\": container with ID starting with bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c not found: ID does not exist" containerID="bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.041679 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c"} err="failed to get container status \"bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c\": rpc error: code = NotFound desc = could not find container \"bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c\": container with ID starting with bce2ea5e8374260d80b951109cc6cd67c1020f1abe330a9ad686b49c7e73d96c not found: ID does not exist"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.041712 4695 scope.go:117] "RemoveContainer" containerID="67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93"
Nov 26 13:48:41 crc kubenswrapper[4695]: E1126 13:48:41.044044 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93\": container with ID starting with 67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93 not found: ID does not exist" containerID="67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.044104 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93"} err="failed to get container status \"67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93\": rpc error: code = NotFound desc = could not find container \"67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93\": container with ID starting with 67957b39ec669f2404c3a56341467091108db51be7df2b81912523d6ca960a93 not found: ID does not exist"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.067628 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.067819 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6d6q\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-kube-api-access-g6d6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.067864 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.067904 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.067946 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.067979 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.068003 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.068041 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.068073 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.068111 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.068150 4695 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: W1126 13:48:41.170650 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51da5818_5d05_4f99_84a7_93eae660a8a7.slice/crio-7e4cf1584287e5177872bb198597abf3ccfc86937e0fe0aae7c17f338b356e93 WatchSource:0}: Error finding container 7e4cf1584287e5177872bb198597abf3ccfc86937e0fe0aae7c17f338b356e93: Status 404 returned error can't find the container with id 7e4cf1584287e5177872bb198597abf3ccfc86937e0fe0aae7c17f338b356e93 Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.170986 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171058 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171093 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171127 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171171 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171211 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171248 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171296 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171387 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171512 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6d6q\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-kube-api-access-g6d6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171566 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.171581 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.172794 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.172960 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.173300 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.177330 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.181818 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.182999 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.183022 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.183089 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27495d77-50c6-4476-86c3-dafb0e5dbb97" path="/var/lib/kubelet/pods/27495d77-50c6-4476-86c3-dafb0e5dbb97/volumes" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.183440 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.183471 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.184740 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a771ad6-98a9-474e-83f0-e17fecdee9be" path="/var/lib/kubelet/pods/6a771ad6-98a9-474e-83f0-e17fecdee9be/volumes" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.185705 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.200891 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6d6q\" (UniqueName: \"kubernetes.io/projected/e7335d8e-0d9a-4532-9f5b-d91cafe38ca7-kube-api-access-g6d6q\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.228937 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7\") " pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.350818 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:48:41 crc kubenswrapper[4695]: W1126 13:48:41.660712 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7335d8e_0d9a_4532_9f5b_d91cafe38ca7.slice/crio-013feca604c123bde41baeacbec9bb74b7f0df70cad32c8e92288504e4e68cea WatchSource:0}: Error finding container 013feca604c123bde41baeacbec9bb74b7f0df70cad32c8e92288504e4e68cea: Status 404 returned error can't find the container with id 013feca604c123bde41baeacbec9bb74b7f0df70cad32c8e92288504e4e68cea Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.669264 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.973632 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7","Type":"ContainerStarted","Data":"013feca604c123bde41baeacbec9bb74b7f0df70cad32c8e92288504e4e68cea"} Nov 26 13:48:41 crc kubenswrapper[4695]: I1126 13:48:41.974731 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"51da5818-5d05-4f99-84a7-93eae660a8a7","Type":"ContainerStarted","Data":"7e4cf1584287e5177872bb198597abf3ccfc86937e0fe0aae7c17f338b356e93"} Nov 26 13:48:42 crc kubenswrapper[4695]: I1126 13:48:42.988115 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"51da5818-5d05-4f99-84a7-93eae660a8a7","Type":"ContainerStarted","Data":"5a487de214a43bb6c188f5238c1d25e53683edcacc1d2917014d10b8b11eb602"} Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.621383 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-xww4t"] Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.623706 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.627005 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.637485 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-xww4t"] Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.816909 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-config\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.817019 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.817077 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-svc\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " 
pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.817423 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.817488 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xstc6\" (UniqueName: \"kubernetes.io/projected/b529f99b-51ee-4d66-89de-008ed26d474f-kube-api-access-xstc6\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.817618 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.817650 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.919920 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.919973 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.920015 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-config\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.920044 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.920066 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-svc\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.920164 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: 
\"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.920185 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xstc6\" (UniqueName: \"kubernetes.io/projected/b529f99b-51ee-4d66-89de-008ed26d474f-kube-api-access-xstc6\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.920803 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.921230 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.921579 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.921827 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " 
pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.922059 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-config\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.922255 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-svc\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.938570 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xstc6\" (UniqueName: \"kubernetes.io/projected/b529f99b-51ee-4d66-89de-008ed26d474f-kube-api-access-xstc6\") pod \"dnsmasq-dns-5576978c7c-xww4t\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.964495 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:43 crc kubenswrapper[4695]: I1126 13:48:43.995422 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7","Type":"ContainerStarted","Data":"cd795a39cb9d744fb6c4f216fbc6eae04cdc0e1e9194b6d7e30ee4f38e444c7e"} Nov 26 13:48:44 crc kubenswrapper[4695]: I1126 13:48:44.438943 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-xww4t"] Nov 26 13:48:45 crc kubenswrapper[4695]: I1126 13:48:45.004421 4695 generic.go:334] "Generic (PLEG): container finished" podID="b529f99b-51ee-4d66-89de-008ed26d474f" containerID="6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852" exitCode=0 Nov 26 13:48:45 crc kubenswrapper[4695]: I1126 13:48:45.004522 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" event={"ID":"b529f99b-51ee-4d66-89de-008ed26d474f","Type":"ContainerDied","Data":"6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852"} Nov 26 13:48:45 crc kubenswrapper[4695]: I1126 13:48:45.004981 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" event={"ID":"b529f99b-51ee-4d66-89de-008ed26d474f","Type":"ContainerStarted","Data":"529332013b26067108165dbcf41c1f47547bac74786b7a16354f56071b226427"} Nov 26 13:48:46 crc kubenswrapper[4695]: I1126 13:48:46.016325 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" event={"ID":"b529f99b-51ee-4d66-89de-008ed26d474f","Type":"ContainerStarted","Data":"66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9"} Nov 26 13:48:46 crc kubenswrapper[4695]: I1126 13:48:46.016802 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:46 crc kubenswrapper[4695]: I1126 13:48:46.038281 4695 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" podStartSLOduration=3.03826262 podStartE2EDuration="3.03826262s" podCreationTimestamp="2025-11-26 13:48:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:48:46.032160996 +0000 UTC m=+1509.667986098" watchObservedRunningTime="2025-11-26 13:48:46.03826262 +0000 UTC m=+1509.674087702" Nov 26 13:48:53 crc kubenswrapper[4695]: I1126 13:48:53.966566 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.026429 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"] Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.026711 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" podUID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" containerName="dnsmasq-dns" containerID="cri-o://8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2" gracePeriod=10 Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.166839 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-pblxz"] Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.168813 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.186295 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-pblxz"] Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.243209 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.243270 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.243292 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.243690 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-config\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.243742 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.243783 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgwz5\" (UniqueName: \"kubernetes.io/projected/86275742-143a-41e5-8029-aa251663c12e-kube-api-access-pgwz5\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.243870 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.345142 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.345489 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.345510 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.345592 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-config\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.345611 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.345634 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgwz5\" (UniqueName: \"kubernetes.io/projected/86275742-143a-41e5-8029-aa251663c12e-kube-api-access-pgwz5\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.345668 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.346015 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.346215 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.346695 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-config\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.346778 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.347334 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.347636 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/86275742-143a-41e5-8029-aa251663c12e-dns-swift-storage-0\") pod 
\"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.369386 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgwz5\" (UniqueName: \"kubernetes.io/projected/86275742-143a-41e5-8029-aa251663c12e-kube-api-access-pgwz5\") pod \"dnsmasq-dns-8c6f6df99-pblxz\" (UID: \"86275742-143a-41e5-8029-aa251663c12e\") " pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.517102 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.645824 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.752536 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-nb\") pod \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.752649 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-svc\") pod \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.752788 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gsmg\" (UniqueName: \"kubernetes.io/projected/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-kube-api-access-4gsmg\") pod \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " Nov 26 13:48:54 crc kubenswrapper[4695]: 
I1126 13:48:54.752868 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-sb\") pod \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.752926 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-config\") pod \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.752969 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-swift-storage-0\") pod \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\" (UID: \"2229721b-1c0b-4ebb-b51c-8e74fc407cbe\") " Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.757725 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-kube-api-access-4gsmg" (OuterVolumeSpecName: "kube-api-access-4gsmg") pod "2229721b-1c0b-4ebb-b51c-8e74fc407cbe" (UID: "2229721b-1c0b-4ebb-b51c-8e74fc407cbe"). InnerVolumeSpecName "kube-api-access-4gsmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.822624 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2229721b-1c0b-4ebb-b51c-8e74fc407cbe" (UID: "2229721b-1c0b-4ebb-b51c-8e74fc407cbe"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.824100 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2229721b-1c0b-4ebb-b51c-8e74fc407cbe" (UID: "2229721b-1c0b-4ebb-b51c-8e74fc407cbe"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.824955 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2229721b-1c0b-4ebb-b51c-8e74fc407cbe" (UID: "2229721b-1c0b-4ebb-b51c-8e74fc407cbe"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.827731 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2229721b-1c0b-4ebb-b51c-8e74fc407cbe" (UID: "2229721b-1c0b-4ebb-b51c-8e74fc407cbe"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.847191 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-config" (OuterVolumeSpecName: "config") pod "2229721b-1c0b-4ebb-b51c-8e74fc407cbe" (UID: "2229721b-1c0b-4ebb-b51c-8e74fc407cbe"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.855719 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.856021 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.856097 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.856166 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gsmg\" (UniqueName: \"kubernetes.io/projected/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-kube-api-access-4gsmg\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.856220 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.856270 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2229721b-1c0b-4ebb-b51c-8e74fc407cbe-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:48:54 crc kubenswrapper[4695]: I1126 13:48:54.987190 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-pblxz"] Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.113141 4695 generic.go:334] "Generic (PLEG): container finished" podID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" 
containerID="8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2" exitCode=0 Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.113176 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.113209 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" event={"ID":"2229721b-1c0b-4ebb-b51c-8e74fc407cbe","Type":"ContainerDied","Data":"8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2"} Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.113262 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-vh7hj" event={"ID":"2229721b-1c0b-4ebb-b51c-8e74fc407cbe","Type":"ContainerDied","Data":"bc2dd3c8fd02a410ae2d4b0810f11c093bddac50639e7b5293a4511ad6b86f33"} Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.113279 4695 scope.go:117] "RemoveContainer" containerID="8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2" Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.115742 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" event={"ID":"86275742-143a-41e5-8029-aa251663c12e","Type":"ContainerStarted","Data":"6fe13801ae78718ee961644bc7211353d9f8b2a54859793b070ef11fe98548e1"} Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.145862 4695 scope.go:117] "RemoveContainer" containerID="54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b" Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.151037 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"] Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.158709 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-vh7hj"] Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.173256 4695 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" path="/var/lib/kubelet/pods/2229721b-1c0b-4ebb-b51c-8e74fc407cbe/volumes" Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.315859 4695 scope.go:117] "RemoveContainer" containerID="8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2" Nov 26 13:48:55 crc kubenswrapper[4695]: E1126 13:48:55.316364 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2\": container with ID starting with 8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2 not found: ID does not exist" containerID="8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2" Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.316419 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2"} err="failed to get container status \"8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2\": rpc error: code = NotFound desc = could not find container \"8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2\": container with ID starting with 8ee3807a382bb2ca9f3e4dca65d5e143aaaa3fd0c2b195e80eab977a28239bc2 not found: ID does not exist" Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.316452 4695 scope.go:117] "RemoveContainer" containerID="54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b" Nov 26 13:48:55 crc kubenswrapper[4695]: E1126 13:48:55.316911 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b\": container with ID starting with 54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b not found: ID does not exist" 
containerID="54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b" Nov 26 13:48:55 crc kubenswrapper[4695]: I1126 13:48:55.316945 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b"} err="failed to get container status \"54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b\": rpc error: code = NotFound desc = could not find container \"54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b\": container with ID starting with 54a3b47080ad3cf408dfc9512302301bba2934849ed6db59458db0021518a34b not found: ID does not exist" Nov 26 13:48:56 crc kubenswrapper[4695]: I1126 13:48:56.126600 4695 generic.go:334] "Generic (PLEG): container finished" podID="86275742-143a-41e5-8029-aa251663c12e" containerID="318436b01194ef91c6a2dbf705fea8cb0e0d1e1de16f07cb885aa96471cb2a42" exitCode=0 Nov 26 13:48:56 crc kubenswrapper[4695]: I1126 13:48:56.126644 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" event={"ID":"86275742-143a-41e5-8029-aa251663c12e","Type":"ContainerDied","Data":"318436b01194ef91c6a2dbf705fea8cb0e0d1e1de16f07cb885aa96471cb2a42"} Nov 26 13:48:57 crc kubenswrapper[4695]: I1126 13:48:57.159398 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" event={"ID":"86275742-143a-41e5-8029-aa251663c12e","Type":"ContainerStarted","Data":"70e96d4e97e6ed772709b3158f932e8fab422b5f52f8b2b906805f8abec2776e"} Nov 26 13:48:57 crc kubenswrapper[4695]: I1126 13:48:57.159687 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:48:57 crc kubenswrapper[4695]: I1126 13:48:57.186801 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" podStartSLOduration=3.186782014 podStartE2EDuration="3.186782014s" 
podCreationTimestamp="2025-11-26 13:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:48:57.181510756 +0000 UTC m=+1520.817335858" watchObservedRunningTime="2025-11-26 13:48:57.186782014 +0000 UTC m=+1520.822607096" Nov 26 13:49:04 crc kubenswrapper[4695]: I1126 13:49:04.520655 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8c6f6df99-pblxz" Nov 26 13:49:04 crc kubenswrapper[4695]: I1126 13:49:04.589388 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-xww4t"] Nov 26 13:49:04 crc kubenswrapper[4695]: I1126 13:49:04.589689 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" podUID="b529f99b-51ee-4d66-89de-008ed26d474f" containerName="dnsmasq-dns" containerID="cri-o://66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9" gracePeriod=10 Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.079838 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.135154 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-config\") pod \"b529f99b-51ee-4d66-89de-008ed26d474f\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.135292 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-nb\") pod \"b529f99b-51ee-4d66-89de-008ed26d474f\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.135323 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xstc6\" (UniqueName: \"kubernetes.io/projected/b529f99b-51ee-4d66-89de-008ed26d474f-kube-api-access-xstc6\") pod \"b529f99b-51ee-4d66-89de-008ed26d474f\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.135365 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-svc\") pod \"b529f99b-51ee-4d66-89de-008ed26d474f\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.140555 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b529f99b-51ee-4d66-89de-008ed26d474f-kube-api-access-xstc6" (OuterVolumeSpecName: "kube-api-access-xstc6") pod "b529f99b-51ee-4d66-89de-008ed26d474f" (UID: "b529f99b-51ee-4d66-89de-008ed26d474f"). InnerVolumeSpecName "kube-api-access-xstc6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.194215 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-config" (OuterVolumeSpecName: "config") pod "b529f99b-51ee-4d66-89de-008ed26d474f" (UID: "b529f99b-51ee-4d66-89de-008ed26d474f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.200117 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b529f99b-51ee-4d66-89de-008ed26d474f" (UID: "b529f99b-51ee-4d66-89de-008ed26d474f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.201473 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b529f99b-51ee-4d66-89de-008ed26d474f" (UID: "b529f99b-51ee-4d66-89de-008ed26d474f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.239542 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-swift-storage-0\") pod \"b529f99b-51ee-4d66-89de-008ed26d474f\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.239591 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-sb\") pod \"b529f99b-51ee-4d66-89de-008ed26d474f\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.239625 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-openstack-edpm-ipam\") pod \"b529f99b-51ee-4d66-89de-008ed26d474f\" (UID: \"b529f99b-51ee-4d66-89de-008ed26d474f\") " Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.240116 4695 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.240136 4695 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-config\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.240149 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.240164 4695 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xstc6\" (UniqueName: \"kubernetes.io/projected/b529f99b-51ee-4d66-89de-008ed26d474f-kube-api-access-xstc6\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.261580 4695 generic.go:334] "Generic (PLEG): container finished" podID="b529f99b-51ee-4d66-89de-008ed26d474f" containerID="66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9" exitCode=0 Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.261812 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" event={"ID":"b529f99b-51ee-4d66-89de-008ed26d474f","Type":"ContainerDied","Data":"66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9"} Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.261904 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" event={"ID":"b529f99b-51ee-4d66-89de-008ed26d474f","Type":"ContainerDied","Data":"529332013b26067108165dbcf41c1f47547bac74786b7a16354f56071b226427"} Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.261976 4695 scope.go:117] "RemoveContainer" containerID="66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.262418 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-xww4t" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.294934 4695 scope.go:117] "RemoveContainer" containerID="6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.296167 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "b529f99b-51ee-4d66-89de-008ed26d474f" (UID: "b529f99b-51ee-4d66-89de-008ed26d474f"). 
InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.310545 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b529f99b-51ee-4d66-89de-008ed26d474f" (UID: "b529f99b-51ee-4d66-89de-008ed26d474f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.326178 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b529f99b-51ee-4d66-89de-008ed26d474f" (UID: "b529f99b-51ee-4d66-89de-008ed26d474f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.331132 4695 scope.go:117] "RemoveContainer" containerID="66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9" Nov 26 13:49:05 crc kubenswrapper[4695]: E1126 13:49:05.331659 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9\": container with ID starting with 66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9 not found: ID does not exist" containerID="66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.331702 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9"} err="failed to get container status \"66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9\": rpc error: code = 
NotFound desc = could not find container \"66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9\": container with ID starting with 66d342745cc076e3eafe441be5dc3f32c98141c52314654eeaafe0a5d947e3a9 not found: ID does not exist" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.331726 4695 scope.go:117] "RemoveContainer" containerID="6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852" Nov 26 13:49:05 crc kubenswrapper[4695]: E1126 13:49:05.332243 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852\": container with ID starting with 6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852 not found: ID does not exist" containerID="6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.332272 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852"} err="failed to get container status \"6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852\": rpc error: code = NotFound desc = could not find container \"6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852\": container with ID starting with 6724045da503ca0356f2229b1d5484c1dfe9cb1f9a1e6c480a7b88d1b7107852 not found: ID does not exist" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.341338 4695 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.341392 4695 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-ovsdbserver-sb\") on node 
\"crc\" DevicePath \"\"" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.341405 4695 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/b529f99b-51ee-4d66-89de-008ed26d474f-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.591779 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-xww4t"] Nov 26 13:49:05 crc kubenswrapper[4695]: I1126 13:49:05.602187 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-xww4t"] Nov 26 13:49:06 crc kubenswrapper[4695]: I1126 13:49:06.397165 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:49:06 crc kubenswrapper[4695]: I1126 13:49:06.397277 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:49:07 crc kubenswrapper[4695]: I1126 13:49:07.177722 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b529f99b-51ee-4d66-89de-008ed26d474f" path="/var/lib/kubelet/pods/b529f99b-51ee-4d66-89de-008ed26d474f/volumes" Nov 26 13:49:15 crc kubenswrapper[4695]: I1126 13:49:15.511515 4695 generic.go:334] "Generic (PLEG): container finished" podID="51da5818-5d05-4f99-84a7-93eae660a8a7" containerID="5a487de214a43bb6c188f5238c1d25e53683edcacc1d2917014d10b8b11eb602" exitCode=0 Nov 26 13:49:15 crc kubenswrapper[4695]: I1126 13:49:15.511622 4695 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/rabbitmq-server-0" event={"ID":"51da5818-5d05-4f99-84a7-93eae660a8a7","Type":"ContainerDied","Data":"5a487de214a43bb6c188f5238c1d25e53683edcacc1d2917014d10b8b11eb602"} Nov 26 13:49:15 crc kubenswrapper[4695]: I1126 13:49:15.515315 4695 generic.go:334] "Generic (PLEG): container finished" podID="e7335d8e-0d9a-4532-9f5b-d91cafe38ca7" containerID="cd795a39cb9d744fb6c4f216fbc6eae04cdc0e1e9194b6d7e30ee4f38e444c7e" exitCode=0 Nov 26 13:49:15 crc kubenswrapper[4695]: I1126 13:49:15.515390 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7","Type":"ContainerDied","Data":"cd795a39cb9d744fb6c4f216fbc6eae04cdc0e1e9194b6d7e30ee4f38e444c7e"} Nov 26 13:49:16 crc kubenswrapper[4695]: I1126 13:49:16.524956 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e7335d8e-0d9a-4532-9f5b-d91cafe38ca7","Type":"ContainerStarted","Data":"123c2ae490c3b656bf3541c81cdb208eb1aaea14d9eba0907d4c1fcb8c4ed8a0"} Nov 26 13:49:16 crc kubenswrapper[4695]: I1126 13:49:16.525541 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:49:16 crc kubenswrapper[4695]: I1126 13:49:16.527767 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"51da5818-5d05-4f99-84a7-93eae660a8a7","Type":"ContainerStarted","Data":"1044d04c0816873d290dd4dc9077b6b1c7abec9f711eb7ab42489689446f312a"} Nov 26 13:49:16 crc kubenswrapper[4695]: I1126 13:49:16.527976 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 26 13:49:16 crc kubenswrapper[4695]: I1126 13:49:16.549961 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.549942737 podStartE2EDuration="36.549942737s" podCreationTimestamp="2025-11-26 13:48:40 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:49:16.5428323 +0000 UTC m=+1540.178657382" watchObservedRunningTime="2025-11-26 13:49:16.549942737 +0000 UTC m=+1540.185767819" Nov 26 13:49:16 crc kubenswrapper[4695]: I1126 13:49:16.572264 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.572242116 podStartE2EDuration="36.572242116s" podCreationTimestamp="2025-11-26 13:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 13:49:16.566296687 +0000 UTC m=+1540.202121759" watchObservedRunningTime="2025-11-26 13:49:16.572242116 +0000 UTC m=+1540.208067208" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.641494 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78"] Nov 26 13:49:17 crc kubenswrapper[4695]: E1126 13:49:17.642163 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b529f99b-51ee-4d66-89de-008ed26d474f" containerName="init" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.642175 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f99b-51ee-4d66-89de-008ed26d474f" containerName="init" Nov 26 13:49:17 crc kubenswrapper[4695]: E1126 13:49:17.642190 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" containerName="dnsmasq-dns" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.642196 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" containerName="dnsmasq-dns" Nov 26 13:49:17 crc kubenswrapper[4695]: E1126 13:49:17.642206 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b529f99b-51ee-4d66-89de-008ed26d474f" containerName="dnsmasq-dns" Nov 26 13:49:17 
crc kubenswrapper[4695]: I1126 13:49:17.642212 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b529f99b-51ee-4d66-89de-008ed26d474f" containerName="dnsmasq-dns" Nov 26 13:49:17 crc kubenswrapper[4695]: E1126 13:49:17.642220 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" containerName="init" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.642225 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" containerName="init" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.642412 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b529f99b-51ee-4d66-89de-008ed26d474f" containerName="dnsmasq-dns" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.642431 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="2229721b-1c0b-4ebb-b51c-8e74fc407cbe" containerName="dnsmasq-dns" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.647949 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.659694 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78"] Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.660629 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.660932 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.661167 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.664479 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.701466 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.701528 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.701586 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv86c\" (UniqueName: \"kubernetes.io/projected/d8407b26-4534-4252-bccf-4e82cea0cd6e-kube-api-access-fv86c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.701609 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.803491 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv86c\" (UniqueName: \"kubernetes.io/projected/d8407b26-4534-4252-bccf-4e82cea0cd6e-kube-api-access-fv86c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.803543 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.803652 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.803688 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.811320 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.811417 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.813598 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc 
kubenswrapper[4695]: I1126 13:49:17.831678 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv86c\" (UniqueName: \"kubernetes.io/projected/d8407b26-4534-4252-bccf-4e82cea0cd6e-kube-api-access-fv86c\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:17 crc kubenswrapper[4695]: I1126 13:49:17.982999 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:18 crc kubenswrapper[4695]: I1126 13:49:18.547018 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78"] Nov 26 13:49:19 crc kubenswrapper[4695]: I1126 13:49:19.577700 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" event={"ID":"d8407b26-4534-4252-bccf-4e82cea0cd6e","Type":"ContainerStarted","Data":"2a7ab589a77c7933cdcfed4a4387a228327755f7a29832a86d4922e93f6c2f5a"} Nov 26 13:49:29 crc kubenswrapper[4695]: I1126 13:49:29.672958 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" event={"ID":"d8407b26-4534-4252-bccf-4e82cea0cd6e","Type":"ContainerStarted","Data":"fffd33f223ec71280865ce91834792a126475a8237070982ae39195626a21f92"} Nov 26 13:49:29 crc kubenswrapper[4695]: I1126 13:49:29.698795 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" podStartSLOduration=2.665210019 podStartE2EDuration="12.698777102s" podCreationTimestamp="2025-11-26 13:49:17 +0000 UTC" firstStartedPulling="2025-11-26 13:49:18.568383165 +0000 UTC m=+1542.204208247" lastFinishedPulling="2025-11-26 13:49:28.601950248 +0000 UTC m=+1552.237775330" 
observedRunningTime="2025-11-26 13:49:29.696180199 +0000 UTC m=+1553.332005321" watchObservedRunningTime="2025-11-26 13:49:29.698777102 +0000 UTC m=+1553.334602174" Nov 26 13:49:30 crc kubenswrapper[4695]: I1126 13:49:30.680995 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 26 13:49:31 crc kubenswrapper[4695]: I1126 13:49:31.354553 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 26 13:49:36 crc kubenswrapper[4695]: I1126 13:49:36.396834 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:49:36 crc kubenswrapper[4695]: I1126 13:49:36.397464 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:49:40 crc kubenswrapper[4695]: I1126 13:49:40.783800 4695 generic.go:334] "Generic (PLEG): container finished" podID="d8407b26-4534-4252-bccf-4e82cea0cd6e" containerID="fffd33f223ec71280865ce91834792a126475a8237070982ae39195626a21f92" exitCode=0 Nov 26 13:49:40 crc kubenswrapper[4695]: I1126 13:49:40.783859 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" event={"ID":"d8407b26-4534-4252-bccf-4e82cea0cd6e","Type":"ContainerDied","Data":"fffd33f223ec71280865ce91834792a126475a8237070982ae39195626a21f92"} Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.264875 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.379817 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-ssh-key\") pod \"d8407b26-4534-4252-bccf-4e82cea0cd6e\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.379965 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-inventory\") pod \"d8407b26-4534-4252-bccf-4e82cea0cd6e\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.380026 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv86c\" (UniqueName: \"kubernetes.io/projected/d8407b26-4534-4252-bccf-4e82cea0cd6e-kube-api-access-fv86c\") pod \"d8407b26-4534-4252-bccf-4e82cea0cd6e\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.380146 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-repo-setup-combined-ca-bundle\") pod \"d8407b26-4534-4252-bccf-4e82cea0cd6e\" (UID: \"d8407b26-4534-4252-bccf-4e82cea0cd6e\") " Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.386361 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d8407b26-4534-4252-bccf-4e82cea0cd6e" (UID: "d8407b26-4534-4252-bccf-4e82cea0cd6e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.394243 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8407b26-4534-4252-bccf-4e82cea0cd6e-kube-api-access-fv86c" (OuterVolumeSpecName: "kube-api-access-fv86c") pod "d8407b26-4534-4252-bccf-4e82cea0cd6e" (UID: "d8407b26-4534-4252-bccf-4e82cea0cd6e"). InnerVolumeSpecName "kube-api-access-fv86c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.422779 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-inventory" (OuterVolumeSpecName: "inventory") pod "d8407b26-4534-4252-bccf-4e82cea0cd6e" (UID: "d8407b26-4534-4252-bccf-4e82cea0cd6e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.437672 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d8407b26-4534-4252-bccf-4e82cea0cd6e" (UID: "d8407b26-4534-4252-bccf-4e82cea0cd6e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.482897 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv86c\" (UniqueName: \"kubernetes.io/projected/d8407b26-4534-4252-bccf-4e82cea0cd6e-kube-api-access-fv86c\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.482952 4695 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.482964 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.482988 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d8407b26-4534-4252-bccf-4e82cea0cd6e-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.804156 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" event={"ID":"d8407b26-4534-4252-bccf-4e82cea0cd6e","Type":"ContainerDied","Data":"2a7ab589a77c7933cdcfed4a4387a228327755f7a29832a86d4922e93f6c2f5a"} Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.804207 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7ab589a77c7933cdcfed4a4387a228327755f7a29832a86d4922e93f6c2f5a" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.804206 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.892044 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt"] Nov 26 13:49:42 crc kubenswrapper[4695]: E1126 13:49:42.892571 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8407b26-4534-4252-bccf-4e82cea0cd6e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.892597 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8407b26-4534-4252-bccf-4e82cea0cd6e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.892812 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8407b26-4534-4252-bccf-4e82cea0cd6e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.893617 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.906231 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt"] Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.930869 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.930867 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.930964 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.932373 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.997310 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.997600 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8jd5\" (UniqueName: \"kubernetes.io/projected/6d8a0921-3704-485a-8ee9-c6250fd2d59e-kube-api-access-m8jd5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:42 crc kubenswrapper[4695]: I1126 13:49:42.997694 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.099272 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.099425 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.099566 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8jd5\" (UniqueName: \"kubernetes.io/projected/6d8a0921-3704-485a-8ee9-c6250fd2d59e-kube-api-access-m8jd5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.104078 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.106859 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.125016 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8jd5\" (UniqueName: \"kubernetes.io/projected/6d8a0921-3704-485a-8ee9-c6250fd2d59e-kube-api-access-m8jd5\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-2pppt\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.245896 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.730939 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt"] Nov 26 13:49:43 crc kubenswrapper[4695]: I1126 13:49:43.814651 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" event={"ID":"6d8a0921-3704-485a-8ee9-c6250fd2d59e","Type":"ContainerStarted","Data":"17686bb9afa69ebc0e92f9e557cf0fa1028505fd6a3e36bfed4ee82249521351"} Nov 26 13:49:44 crc kubenswrapper[4695]: I1126 13:49:44.825436 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" event={"ID":"6d8a0921-3704-485a-8ee9-c6250fd2d59e","Type":"ContainerStarted","Data":"97d9aec8dcbfea765c1387ac18a9fd4bb05649a9ae999b131142c5b34aee4eea"} Nov 26 13:49:44 crc kubenswrapper[4695]: I1126 13:49:44.850231 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" podStartSLOduration=2.270987666 podStartE2EDuration="2.850214907s" podCreationTimestamp="2025-11-26 13:49:42 +0000 UTC" firstStartedPulling="2025-11-26 13:49:43.734394842 +0000 UTC m=+1567.370219924" lastFinishedPulling="2025-11-26 13:49:44.313622083 +0000 UTC m=+1567.949447165" observedRunningTime="2025-11-26 13:49:44.845981901 +0000 UTC m=+1568.481806983" watchObservedRunningTime="2025-11-26 13:49:44.850214907 +0000 UTC m=+1568.486039979" Nov 26 13:49:47 crc kubenswrapper[4695]: I1126 13:49:47.849299 4695 generic.go:334] "Generic (PLEG): container finished" podID="6d8a0921-3704-485a-8ee9-c6250fd2d59e" containerID="97d9aec8dcbfea765c1387ac18a9fd4bb05649a9ae999b131142c5b34aee4eea" exitCode=0 Nov 26 13:49:47 crc kubenswrapper[4695]: I1126 13:49:47.849377 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" event={"ID":"6d8a0921-3704-485a-8ee9-c6250fd2d59e","Type":"ContainerDied","Data":"97d9aec8dcbfea765c1387ac18a9fd4bb05649a9ae999b131142c5b34aee4eea"} Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.265044 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.416220 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-inventory\") pod \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.416436 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-ssh-key\") pod \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.416534 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8jd5\" (UniqueName: \"kubernetes.io/projected/6d8a0921-3704-485a-8ee9-c6250fd2d59e-kube-api-access-m8jd5\") pod \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\" (UID: \"6d8a0921-3704-485a-8ee9-c6250fd2d59e\") " Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.424421 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d8a0921-3704-485a-8ee9-c6250fd2d59e-kube-api-access-m8jd5" (OuterVolumeSpecName: "kube-api-access-m8jd5") pod "6d8a0921-3704-485a-8ee9-c6250fd2d59e" (UID: "6d8a0921-3704-485a-8ee9-c6250fd2d59e"). InnerVolumeSpecName "kube-api-access-m8jd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.445576 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-inventory" (OuterVolumeSpecName: "inventory") pod "6d8a0921-3704-485a-8ee9-c6250fd2d59e" (UID: "6d8a0921-3704-485a-8ee9-c6250fd2d59e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.455686 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6d8a0921-3704-485a-8ee9-c6250fd2d59e" (UID: "6d8a0921-3704-485a-8ee9-c6250fd2d59e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.519241 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.519272 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8jd5\" (UniqueName: \"kubernetes.io/projected/6d8a0921-3704-485a-8ee9-c6250fd2d59e-kube-api-access-m8jd5\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.519285 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6d8a0921-3704-485a-8ee9-c6250fd2d59e-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.868543 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" 
event={"ID":"6d8a0921-3704-485a-8ee9-c6250fd2d59e","Type":"ContainerDied","Data":"17686bb9afa69ebc0e92f9e557cf0fa1028505fd6a3e36bfed4ee82249521351"} Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.868591 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17686bb9afa69ebc0e92f9e557cf0fa1028505fd6a3e36bfed4ee82249521351" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.868593 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-2pppt" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.935298 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x"] Nov 26 13:49:49 crc kubenswrapper[4695]: E1126 13:49:49.935829 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d8a0921-3704-485a-8ee9-c6250fd2d59e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.935854 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d8a0921-3704-485a-8ee9-c6250fd2d59e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.936130 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d8a0921-3704-485a-8ee9-c6250fd2d59e" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.936976 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.943489 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.944225 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.944365 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.944567 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:49:49 crc kubenswrapper[4695]: I1126 13:49:49.951591 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x"] Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.028282 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.028463 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.028568 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4mc\" (UniqueName: \"kubernetes.io/projected/6b85ca84-0932-4ed9-bcc9-883e52f07315-kube-api-access-tg4mc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.028621 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.130761 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.130871 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.130942 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg4mc\" (UniqueName: \"kubernetes.io/projected/6b85ca84-0932-4ed9-bcc9-883e52f07315-kube-api-access-tg4mc\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.130989 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.134965 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.135183 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.136771 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.157991 4695 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-tg4mc\" (UniqueName: \"kubernetes.io/projected/6b85ca84-0932-4ed9-bcc9-883e52f07315-kube-api-access-tg4mc\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.253553 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.799523 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x"] Nov 26 13:49:50 crc kubenswrapper[4695]: I1126 13:49:50.880485 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" event={"ID":"6b85ca84-0932-4ed9-bcc9-883e52f07315","Type":"ContainerStarted","Data":"c711b66516e1f81e9b4bacd9307752de4479b1227cb856f645fcc781db022357"} Nov 26 13:49:51 crc kubenswrapper[4695]: I1126 13:49:51.892641 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" event={"ID":"6b85ca84-0932-4ed9-bcc9-883e52f07315","Type":"ContainerStarted","Data":"2504c9355edaeb3152e57fc2ae51301bc9b73dc4a120c5be326176fa298ec327"} Nov 26 13:49:51 crc kubenswrapper[4695]: I1126 13:49:51.931602 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" podStartSLOduration=2.475946339 podStartE2EDuration="2.931547635s" podCreationTimestamp="2025-11-26 13:49:49 +0000 UTC" firstStartedPulling="2025-11-26 13:49:50.800440194 +0000 UTC m=+1574.436265276" lastFinishedPulling="2025-11-26 13:49:51.25604148 +0000 UTC m=+1574.891866572" observedRunningTime="2025-11-26 13:49:51.920870235 +0000 UTC m=+1575.556695327" watchObservedRunningTime="2025-11-26 
13:49:51.931547635 +0000 UTC m=+1575.567372727" Nov 26 13:50:04 crc kubenswrapper[4695]: I1126 13:50:04.997118 4695 scope.go:117] "RemoveContainer" containerID="e04dba50bdacaa6d716eb39c213506ec7380b5a69f95a1b88ecc934d4e75d15c" Nov 26 13:50:05 crc kubenswrapper[4695]: I1126 13:50:05.031746 4695 scope.go:117] "RemoveContainer" containerID="6f2aa02fe29dad19cf15659a8abdfbde73ae603d00446fc255df2414c759783e" Nov 26 13:50:06 crc kubenswrapper[4695]: I1126 13:50:06.397109 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:50:06 crc kubenswrapper[4695]: I1126 13:50:06.397563 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:50:06 crc kubenswrapper[4695]: I1126 13:50:06.397615 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:50:06 crc kubenswrapper[4695]: I1126 13:50:06.398419 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:50:06 crc kubenswrapper[4695]: I1126 13:50:06.398490 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" 
podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" gracePeriod=600 Nov 26 13:50:06 crc kubenswrapper[4695]: E1126 13:50:06.533364 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:50:07 crc kubenswrapper[4695]: I1126 13:50:07.056333 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" exitCode=0 Nov 26 13:50:07 crc kubenswrapper[4695]: I1126 13:50:07.056712 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306"} Nov 26 13:50:07 crc kubenswrapper[4695]: I1126 13:50:07.056752 4695 scope.go:117] "RemoveContainer" containerID="d704a070a53ff2c48f0d0e2fcd3340ab686270f91e63ed80bdf2afa4d7bc31e8" Nov 26 13:50:07 crc kubenswrapper[4695]: I1126 13:50:07.057775 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:50:07 crc kubenswrapper[4695]: E1126 13:50:07.058526 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:50:21 crc kubenswrapper[4695]: I1126 13:50:21.163429 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:50:21 crc kubenswrapper[4695]: E1126 13:50:21.165414 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:50:33 crc kubenswrapper[4695]: I1126 13:50:33.162639 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:50:33 crc kubenswrapper[4695]: E1126 13:50:33.163382 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:50:47 crc kubenswrapper[4695]: I1126 13:50:47.169563 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:50:47 crc kubenswrapper[4695]: E1126 13:50:47.170330 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:51:02 crc kubenswrapper[4695]: I1126 13:51:02.162733 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:51:02 crc kubenswrapper[4695]: E1126 13:51:02.163445 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:51:05 crc kubenswrapper[4695]: I1126 13:51:05.228098 4695 scope.go:117] "RemoveContainer" containerID="eda59ffe971d04f35835e6a706624104e1248cb7938d7c9cfd9bf605fbb4cead" Nov 26 13:51:13 crc kubenswrapper[4695]: I1126 13:51:13.162880 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:51:13 crc kubenswrapper[4695]: E1126 13:51:13.163543 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:51:24 crc kubenswrapper[4695]: I1126 13:51:24.163400 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:51:24 crc kubenswrapper[4695]: 
E1126 13:51:24.164471 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:51:36 crc kubenswrapper[4695]: I1126 13:51:36.163115 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:51:36 crc kubenswrapper[4695]: E1126 13:51:36.164106 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:51:47 crc kubenswrapper[4695]: I1126 13:51:47.171631 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:51:47 crc kubenswrapper[4695]: E1126 13:51:47.172363 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:51:59 crc kubenswrapper[4695]: I1126 13:51:59.163891 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:51:59 crc 
kubenswrapper[4695]: E1126 13:51:59.164736 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:52:05 crc kubenswrapper[4695]: I1126 13:52:05.311073 4695 scope.go:117] "RemoveContainer" containerID="d7814041a442373424a893c2909448005d29f818124b28d7bdfdd3db02375560" Nov 26 13:52:05 crc kubenswrapper[4695]: I1126 13:52:05.338381 4695 scope.go:117] "RemoveContainer" containerID="562b4e0156c8a3cb6087ec24630c65fbfba793c2a1ebaa6cd5974eb5571b7d9d" Nov 26 13:52:05 crc kubenswrapper[4695]: I1126 13:52:05.366387 4695 scope.go:117] "RemoveContainer" containerID="0aee747e74c12cdfc35dbc8f4f260250b54689d530cf1dd81bc49f1f55f3f381" Nov 26 13:52:05 crc kubenswrapper[4695]: I1126 13:52:05.391433 4695 scope.go:117] "RemoveContainer" containerID="a6447666e61865216259c7f6f5d4e32a29d90e7d887dd0c31dbb838eea65a003" Nov 26 13:52:10 crc kubenswrapper[4695]: I1126 13:52:10.161787 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:52:10 crc kubenswrapper[4695]: E1126 13:52:10.163574 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:52:24 crc kubenswrapper[4695]: I1126 13:52:24.162582 4695 scope.go:117] "RemoveContainer" 
containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:52:24 crc kubenswrapper[4695]: E1126 13:52:24.163392 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:52:39 crc kubenswrapper[4695]: I1126 13:52:39.163093 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:52:39 crc kubenswrapper[4695]: E1126 13:52:39.163907 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:52:53 crc kubenswrapper[4695]: I1126 13:52:53.162708 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:52:53 crc kubenswrapper[4695]: E1126 13:52:53.164115 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:52:59 crc kubenswrapper[4695]: I1126 13:52:59.682688 4695 generic.go:334] 
"Generic (PLEG): container finished" podID="6b85ca84-0932-4ed9-bcc9-883e52f07315" containerID="2504c9355edaeb3152e57fc2ae51301bc9b73dc4a120c5be326176fa298ec327" exitCode=0 Nov 26 13:52:59 crc kubenswrapper[4695]: I1126 13:52:59.682792 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" event={"ID":"6b85ca84-0932-4ed9-bcc9-883e52f07315","Type":"ContainerDied","Data":"2504c9355edaeb3152e57fc2ae51301bc9b73dc4a120c5be326176fa298ec327"} Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.135239 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.267019 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-inventory\") pod \"6b85ca84-0932-4ed9-bcc9-883e52f07315\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.267514 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg4mc\" (UniqueName: \"kubernetes.io/projected/6b85ca84-0932-4ed9-bcc9-883e52f07315-kube-api-access-tg4mc\") pod \"6b85ca84-0932-4ed9-bcc9-883e52f07315\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.267550 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-bootstrap-combined-ca-bundle\") pod \"6b85ca84-0932-4ed9-bcc9-883e52f07315\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.267616 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-ssh-key\") pod \"6b85ca84-0932-4ed9-bcc9-883e52f07315\" (UID: \"6b85ca84-0932-4ed9-bcc9-883e52f07315\") " Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.273598 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "6b85ca84-0932-4ed9-bcc9-883e52f07315" (UID: "6b85ca84-0932-4ed9-bcc9-883e52f07315"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.276485 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b85ca84-0932-4ed9-bcc9-883e52f07315-kube-api-access-tg4mc" (OuterVolumeSpecName: "kube-api-access-tg4mc") pod "6b85ca84-0932-4ed9-bcc9-883e52f07315" (UID: "6b85ca84-0932-4ed9-bcc9-883e52f07315"). InnerVolumeSpecName "kube-api-access-tg4mc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.300071 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-inventory" (OuterVolumeSpecName: "inventory") pod "6b85ca84-0932-4ed9-bcc9-883e52f07315" (UID: "6b85ca84-0932-4ed9-bcc9-883e52f07315"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.324734 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6b85ca84-0932-4ed9-bcc9-883e52f07315" (UID: "6b85ca84-0932-4ed9-bcc9-883e52f07315"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.370581 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.370620 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg4mc\" (UniqueName: \"kubernetes.io/projected/6b85ca84-0932-4ed9-bcc9-883e52f07315-kube-api-access-tg4mc\") on node \"crc\" DevicePath \"\"" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.370635 4695 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.370647 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6b85ca84-0932-4ed9-bcc9-883e52f07315-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.707910 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" event={"ID":"6b85ca84-0932-4ed9-bcc9-883e52f07315","Type":"ContainerDied","Data":"c711b66516e1f81e9b4bacd9307752de4479b1227cb856f645fcc781db022357"} Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.707959 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c711b66516e1f81e9b4bacd9307752de4479b1227cb856f645fcc781db022357" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.707973 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.814201 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24"] Nov 26 13:53:01 crc kubenswrapper[4695]: E1126 13:53:01.814917 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b85ca84-0932-4ed9-bcc9-883e52f07315" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.814948 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b85ca84-0932-4ed9-bcc9-883e52f07315" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.815180 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b85ca84-0932-4ed9-bcc9-883e52f07315" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.816096 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.817765 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.818503 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.819083 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.820574 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.835565 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24"] Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.879434 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.879531 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.879585 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snx8m\" (UniqueName: \"kubernetes.io/projected/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-kube-api-access-snx8m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.981753 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.981824 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.981860 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snx8m\" (UniqueName: \"kubernetes.io/projected/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-kube-api-access-snx8m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.985867 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-inventory\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:01 crc kubenswrapper[4695]: I1126 13:53:01.989944 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:02 crc kubenswrapper[4695]: I1126 13:53:02.007961 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snx8m\" (UniqueName: \"kubernetes.io/projected/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-kube-api-access-snx8m\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-4rf24\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:02 crc kubenswrapper[4695]: I1126 13:53:02.139646 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:53:02 crc kubenswrapper[4695]: I1126 13:53:02.636819 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24"] Nov 26 13:53:02 crc kubenswrapper[4695]: I1126 13:53:02.639323 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:53:02 crc kubenswrapper[4695]: I1126 13:53:02.717927 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" event={"ID":"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41","Type":"ContainerStarted","Data":"75427f9220e405df0868a68d844ab224dc0348c0b7d25458dd748fe47ecab8ed"} Nov 26 13:53:04 crc kubenswrapper[4695]: I1126 13:53:04.749850 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" event={"ID":"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41","Type":"ContainerStarted","Data":"f7294872e38c678a05afcfc81cbd0daa0f8ad0b346b64737c2491af67a27652d"} Nov 26 13:53:04 crc kubenswrapper[4695]: I1126 13:53:04.771774 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" podStartSLOduration=2.847544327 podStartE2EDuration="3.77175759s" podCreationTimestamp="2025-11-26 13:53:01 +0000 UTC" firstStartedPulling="2025-11-26 13:53:02.638969808 +0000 UTC m=+1766.274794910" lastFinishedPulling="2025-11-26 13:53:03.563183091 +0000 UTC m=+1767.199008173" observedRunningTime="2025-11-26 13:53:04.770636814 +0000 UTC m=+1768.406461926" watchObservedRunningTime="2025-11-26 13:53:04.77175759 +0000 UTC m=+1768.407582672" Nov 26 13:53:05 crc kubenswrapper[4695]: I1126 13:53:05.468285 4695 scope.go:117] "RemoveContainer" containerID="8828bb7f6835b9462a70898272fa408117c07aa7680fc303dbe5e7dab045fb07" Nov 26 13:53:05 crc 
kubenswrapper[4695]: I1126 13:53:05.487979 4695 scope.go:117] "RemoveContainer" containerID="733f930a67330efc3b7e971a6c4d06aed8428ef7f2152214c2c2140e214a33db" Nov 26 13:53:05 crc kubenswrapper[4695]: I1126 13:53:05.511488 4695 scope.go:117] "RemoveContainer" containerID="123562bd6aedd7f2b7b53a06afe9373129208af99f5a73ea4379b45efdb1a8f0" Nov 26 13:53:05 crc kubenswrapper[4695]: I1126 13:53:05.528815 4695 scope.go:117] "RemoveContainer" containerID="24d6ad1293d13b9f779604f8373a681222e97fc5a9ea7b8342d82affd941f075" Nov 26 13:53:07 crc kubenswrapper[4695]: I1126 13:53:07.171072 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:53:07 crc kubenswrapper[4695]: E1126 13:53:07.171729 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:53:18 crc kubenswrapper[4695]: I1126 13:53:18.162013 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:53:18 crc kubenswrapper[4695]: E1126 13:53:18.162567 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:53:29 crc kubenswrapper[4695]: I1126 13:53:29.162655 4695 scope.go:117] "RemoveContainer" 
containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:53:29 crc kubenswrapper[4695]: E1126 13:53:29.163438 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:53:31 crc kubenswrapper[4695]: I1126 13:53:31.045124 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-nbvwd"] Nov 26 13:53:31 crc kubenswrapper[4695]: I1126 13:53:31.053668 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-nbvwd"] Nov 26 13:53:31 crc kubenswrapper[4695]: I1126 13:53:31.175058 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0449bc56-cc0a-42e9-a59e-c89c8af8b64c" path="/var/lib/kubelet/pods/0449bc56-cc0a-42e9-a59e-c89c8af8b64c/volumes" Nov 26 13:53:32 crc kubenswrapper[4695]: I1126 13:53:32.036403 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-89jz2"] Nov 26 13:53:32 crc kubenswrapper[4695]: I1126 13:53:32.045746 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-89jz2"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.036190 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dm6ln"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.047630 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3897-account-create-update-dvm5t"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.060422 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3897-account-create-update-dvm5t"] Nov 26 13:53:33 crc 
kubenswrapper[4695]: I1126 13:53:33.073744 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-dm6ln"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.085310 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b232-account-create-update-rlm2c"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.092452 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b669-account-create-update-sksw2"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.100109 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b232-account-create-update-rlm2c"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.122812 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b669-account-create-update-sksw2"] Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.173002 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d3c131c-b74a-4f3e-bf6d-c490a8d300c7" path="/var/lib/kubelet/pods/6d3c131c-b74a-4f3e-bf6d-c490a8d300c7/volumes" Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.173796 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="889f9407-9537-4078-91f9-01e10810dd66" path="/var/lib/kubelet/pods/889f9407-9537-4078-91f9-01e10810dd66/volumes" Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.174325 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88cbdc83-2c93-4fa9-8fca-dcce51f8f59d" path="/var/lib/kubelet/pods/88cbdc83-2c93-4fa9-8fca-dcce51f8f59d/volumes" Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.174900 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba02b5bd-787d-4939-bfd7-5875a25173dc" path="/var/lib/kubelet/pods/ba02b5bd-787d-4939-bfd7-5875a25173dc/volumes" Nov 26 13:53:33 crc kubenswrapper[4695]: I1126 13:53:33.175991 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ce3c192f-2601-41cc-a59f-716af5cffc4c" path="/var/lib/kubelet/pods/ce3c192f-2601-41cc-a59f-716af5cffc4c/volumes" Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.042987 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-k62d7"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.053305 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-c5874"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.065940 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8be9-account-create-update-shnqq"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.078729 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-k62d7"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.090484 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-9d5a-account-create-update-h8n7j"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.101296 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-c5874"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.112859 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8be9-account-create-update-shnqq"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.121905 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-9d5a-account-create-update-h8n7j"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.130330 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e1ab-account-create-update-tqhkx"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.138653 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-n95vb"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.149207 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e1ab-account-create-update-tqhkx"] Nov 26 13:53:40 crc 
kubenswrapper[4695]: I1126 13:53:40.162509 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-n95vb"] Nov 26 13:53:40 crc kubenswrapper[4695]: I1126 13:53:40.162987 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:53:40 crc kubenswrapper[4695]: E1126 13:53:40.163234 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:53:41 crc kubenswrapper[4695]: I1126 13:53:41.173284 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4820ce3c-a324-4d56-a27a-9dd2695e286f" path="/var/lib/kubelet/pods/4820ce3c-a324-4d56-a27a-9dd2695e286f/volumes" Nov 26 13:53:41 crc kubenswrapper[4695]: I1126 13:53:41.173947 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c86a814-b63d-48d8-98de-6f560864c876" path="/var/lib/kubelet/pods/4c86a814-b63d-48d8-98de-6f560864c876/volumes" Nov 26 13:53:41 crc kubenswrapper[4695]: I1126 13:53:41.174491 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c693edf-bd03-49b1-b26e-0bca3bae1cf0" path="/var/lib/kubelet/pods/9c693edf-bd03-49b1-b26e-0bca3bae1cf0/volumes" Nov 26 13:53:41 crc kubenswrapper[4695]: I1126 13:53:41.175017 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d" path="/var/lib/kubelet/pods/a17d04d7-2a8b-4b1f-8aa4-7472ab0ce99d/volumes" Nov 26 13:53:41 crc kubenswrapper[4695]: I1126 13:53:41.175998 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b26d39e6-265d-44a3-a6f3-ca353fa4d0a9" 
path="/var/lib/kubelet/pods/b26d39e6-265d-44a3-a6f3-ca353fa4d0a9/volumes" Nov 26 13:53:41 crc kubenswrapper[4695]: I1126 13:53:41.176536 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d572e570-4517-4440-9226-7c432f0e318c" path="/var/lib/kubelet/pods/d572e570-4517-4440-9226-7c432f0e318c/volumes" Nov 26 13:53:54 crc kubenswrapper[4695]: I1126 13:53:54.162558 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:53:54 crc kubenswrapper[4695]: E1126 13:53:54.163795 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.586522 4695 scope.go:117] "RemoveContainer" containerID="7fa0c9ffc72bc74b80d11112d408d191f3e8f6cd3d12c6f4b7c26c24052562c9" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.615563 4695 scope.go:117] "RemoveContainer" containerID="1990c48e5adce583bb4dec1a557c2138c9c77c9d2125f0f4a63eff0e766767ec" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.664686 4695 scope.go:117] "RemoveContainer" containerID="1146cbdefb024635c96fcb30c9a43949d0daf6fad215a9e9dfb8d21d690e8efd" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.704185 4695 scope.go:117] "RemoveContainer" containerID="c6b71a53d6ee48329275f67c4ac1c81c799db724d6e3d7d7c942b83790991dc0" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.777265 4695 scope.go:117] "RemoveContainer" containerID="d3784b39c3303c2f7ae6d1b9d49a4eaf104ae83032ec49fc7c88c3e3fb91e7cc" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.800474 4695 scope.go:117] "RemoveContainer" 
containerID="e145ca5294c61ea43fc706c3ba959e1770f298eb127f1ae6ad2a8dbb8d4f7e06" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.842485 4695 scope.go:117] "RemoveContainer" containerID="4e8b42fecfde496f6c6b1290b512311a344408571580bac4b6e2dd6a63a770cc" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.870102 4695 scope.go:117] "RemoveContainer" containerID="5c0eb75d07f93655baac364818a492a7fc447a82bb8e3dbc65049812e44edd8d" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.894810 4695 scope.go:117] "RemoveContainer" containerID="e2388237a821d99f67bfbb33aaf6d6c456ee0ac5670d300c389d6105726591b5" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.916209 4695 scope.go:117] "RemoveContainer" containerID="b135f9e3bb5cf0f6f018142fb9b30fa3e7ef58f6458e703f8285c728dc576975" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.934788 4695 scope.go:117] "RemoveContainer" containerID="bb340cb4226b6ec4bc5986974502f05bafbba9f1eefd12bb5e6ef6fc8f07958c" Nov 26 13:54:05 crc kubenswrapper[4695]: I1126 13:54:05.952839 4695 scope.go:117] "RemoveContainer" containerID="ccb276f1eff0de3d00f846752385ce7e0830484da77fcffb75dc76e9bc16af79" Nov 26 13:54:08 crc kubenswrapper[4695]: I1126 13:54:08.323255 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:54:08 crc kubenswrapper[4695]: E1126 13:54:08.323788 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:54:22 crc kubenswrapper[4695]: I1126 13:54:22.163030 4695 scope.go:117] "RemoveContainer" 
containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:54:22 crc kubenswrapper[4695]: E1126 13:54:22.163977 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:54:31 crc kubenswrapper[4695]: I1126 13:54:31.041595 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-jj9bb"] Nov 26 13:54:31 crc kubenswrapper[4695]: I1126 13:54:31.050945 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-jj9bb"] Nov 26 13:54:31 crc kubenswrapper[4695]: I1126 13:54:31.172474 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91bc2f0-eaf1-4a68-9135-af44285dd833" path="/var/lib/kubelet/pods/e91bc2f0-eaf1-4a68-9135-af44285dd833/volumes" Nov 26 13:54:33 crc kubenswrapper[4695]: I1126 13:54:33.162790 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:54:33 crc kubenswrapper[4695]: E1126 13:54:33.163369 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:54:42 crc kubenswrapper[4695]: I1126 13:54:42.042447 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-xktkp"] Nov 26 13:54:42 crc 
kubenswrapper[4695]: I1126 13:54:42.049885 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-xktkp"] Nov 26 13:54:42 crc kubenswrapper[4695]: I1126 13:54:42.690212 4695 generic.go:334] "Generic (PLEG): container finished" podID="2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41" containerID="f7294872e38c678a05afcfc81cbd0daa0f8ad0b346b64737c2491af67a27652d" exitCode=0 Nov 26 13:54:42 crc kubenswrapper[4695]: I1126 13:54:42.690458 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" event={"ID":"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41","Type":"ContainerDied","Data":"f7294872e38c678a05afcfc81cbd0daa0f8ad0b346b64737c2491af67a27652d"} Nov 26 13:54:43 crc kubenswrapper[4695]: I1126 13:54:43.180886 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12ef57bb-7a18-4350-a2b8-86efd6babbe0" path="/var/lib/kubelet/pods/12ef57bb-7a18-4350-a2b8-86efd6babbe0/volumes" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.140155 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.258961 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-ssh-key\") pod \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.259119 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-inventory\") pod \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.259220 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snx8m\" (UniqueName: \"kubernetes.io/projected/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-kube-api-access-snx8m\") pod \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\" (UID: \"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41\") " Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.269043 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-kube-api-access-snx8m" (OuterVolumeSpecName: "kube-api-access-snx8m") pod "2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41" (UID: "2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41"). InnerVolumeSpecName "kube-api-access-snx8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.288230 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41" (UID: "2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.288806 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-inventory" (OuterVolumeSpecName: "inventory") pod "2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41" (UID: "2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.362710 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.362769 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.362781 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snx8m\" (UniqueName: \"kubernetes.io/projected/2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41-kube-api-access-snx8m\") on node \"crc\" DevicePath \"\"" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.719906 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" event={"ID":"2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41","Type":"ContainerDied","Data":"75427f9220e405df0868a68d844ab224dc0348c0b7d25458dd748fe47ecab8ed"} Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.719966 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75427f9220e405df0868a68d844ab224dc0348c0b7d25458dd748fe47ecab8ed" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.720028 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-4rf24" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.808810 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2"] Nov 26 13:54:44 crc kubenswrapper[4695]: E1126 13:54:44.809380 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.809406 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.809632 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.810408 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.814584 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.814637 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.814783 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.815057 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.819042 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2"] Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.871989 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.872058 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.872425 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpwph\" (UniqueName: \"kubernetes.io/projected/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-kube-api-access-cpwph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.974796 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.974896 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.974988 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpwph\" (UniqueName: \"kubernetes.io/projected/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-kube-api-access-cpwph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.983714 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-inventory\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.984090 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:44 crc kubenswrapper[4695]: I1126 13:54:44.996550 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpwph\" (UniqueName: \"kubernetes.io/projected/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-kube-api-access-cpwph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:45 crc kubenswrapper[4695]: I1126 13:54:45.138745 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:54:45 crc kubenswrapper[4695]: I1126 13:54:45.686095 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2"] Nov 26 13:54:45 crc kubenswrapper[4695]: I1126 13:54:45.730050 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" event={"ID":"960d575b-5f75-45a2-8dbe-dd185d9dc0a0","Type":"ContainerStarted","Data":"86d3022905c99e1a2cb03cd9c66ebb07bf930a892b37b4beda17b6960f6e1afd"} Nov 26 13:54:46 crc kubenswrapper[4695]: I1126 13:54:46.162121 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:54:46 crc kubenswrapper[4695]: E1126 13:54:46.162742 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:54:46 crc kubenswrapper[4695]: I1126 13:54:46.739024 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" event={"ID":"960d575b-5f75-45a2-8dbe-dd185d9dc0a0","Type":"ContainerStarted","Data":"fc6f202e79846b1c9bf4beac01540e132371c69595ed099beb27b7026f6d4963"} Nov 26 13:54:46 crc kubenswrapper[4695]: I1126 13:54:46.753321 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" podStartSLOduration=2.121818978 podStartE2EDuration="2.753307379s" podCreationTimestamp="2025-11-26 13:54:44 +0000 UTC" 
firstStartedPulling="2025-11-26 13:54:45.691450887 +0000 UTC m=+1869.327275969" lastFinishedPulling="2025-11-26 13:54:46.322939298 +0000 UTC m=+1869.958764370" observedRunningTime="2025-11-26 13:54:46.752319486 +0000 UTC m=+1870.388144568" watchObservedRunningTime="2025-11-26 13:54:46.753307379 +0000 UTC m=+1870.389132461" Nov 26 13:54:57 crc kubenswrapper[4695]: I1126 13:54:57.168237 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:54:57 crc kubenswrapper[4695]: E1126 13:54:57.169082 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 13:55:01 crc kubenswrapper[4695]: I1126 13:55:01.035730 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cc28g"] Nov 26 13:55:01 crc kubenswrapper[4695]: I1126 13:55:01.043713 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cc28g"] Nov 26 13:55:01 crc kubenswrapper[4695]: I1126 13:55:01.181859 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c85245-6bcd-4580-a85f-51fa41122292" path="/var/lib/kubelet/pods/07c85245-6bcd-4580-a85f-51fa41122292/volumes" Nov 26 13:55:06 crc kubenswrapper[4695]: I1126 13:55:06.204940 4695 scope.go:117] "RemoveContainer" containerID="cadcc79f818d6323d2963027de9a3ea4e927c33c915b4c4c4ceede6dc3132e74" Nov 26 13:55:06 crc kubenswrapper[4695]: I1126 13:55:06.255050 4695 scope.go:117] "RemoveContainer" containerID="c5836f93af7c729db751b5363c438a5cca7867eb9aa3c91fa8795af36f4a29be" Nov 26 13:55:06 crc kubenswrapper[4695]: I1126 13:55:06.297876 4695 
scope.go:117] "RemoveContainer" containerID="66ac5dc54a713aeecd987298b3a2dd52ca1778c57dc5ab0e6bb6a64c8ad8e22e" Nov 26 13:55:08 crc kubenswrapper[4695]: I1126 13:55:08.163233 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:55:08 crc kubenswrapper[4695]: I1126 13:55:08.937744 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"9054ffcb43e1cd20adf0a64a79d1ce9e9ce7c6483384269e4ffce8dac0186885"} Nov 26 13:55:14 crc kubenswrapper[4695]: I1126 13:55:14.049868 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-s5q69"] Nov 26 13:55:14 crc kubenswrapper[4695]: I1126 13:55:14.062193 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-s5q69"] Nov 26 13:55:15 crc kubenswrapper[4695]: I1126 13:55:15.031259 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-gs5xh"] Nov 26 13:55:15 crc kubenswrapper[4695]: I1126 13:55:15.043241 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5lf5p"] Nov 26 13:55:15 crc kubenswrapper[4695]: I1126 13:55:15.052159 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-gs5xh"] Nov 26 13:55:15 crc kubenswrapper[4695]: I1126 13:55:15.061232 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5lf5p"] Nov 26 13:55:15 crc kubenswrapper[4695]: I1126 13:55:15.175632 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322aceb8-cfb2-478e-a586-68c3f43b3977" path="/var/lib/kubelet/pods/322aceb8-cfb2-478e-a586-68c3f43b3977/volumes" Nov 26 13:55:15 crc kubenswrapper[4695]: I1126 13:55:15.176290 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="4e6d0467-d196-483d-a0af-c616fcffd987" path="/var/lib/kubelet/pods/4e6d0467-d196-483d-a0af-c616fcffd987/volumes" Nov 26 13:55:15 crc kubenswrapper[4695]: I1126 13:55:15.176922 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74573dd4-c899-4229-b940-e2f82063aa84" path="/var/lib/kubelet/pods/74573dd4-c899-4229-b940-e2f82063aa84/volumes" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.594737 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5r7c"] Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.598731 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.609529 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5r7c"] Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.625712 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-catalog-content\") pod \"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.626012 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-utilities\") pod \"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.626135 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkjdf\" (UniqueName: \"kubernetes.io/projected/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-kube-api-access-rkjdf\") pod 
\"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.728585 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-catalog-content\") pod \"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.728652 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-utilities\") pod \"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.728706 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkjdf\" (UniqueName: \"kubernetes.io/projected/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-kube-api-access-rkjdf\") pod \"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.729283 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-catalog-content\") pod \"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.729334 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-utilities\") pod \"redhat-operators-z5r7c\" (UID: 
\"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.755107 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkjdf\" (UniqueName: \"kubernetes.io/projected/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-kube-api-access-rkjdf\") pod \"redhat-operators-z5r7c\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:17 crc kubenswrapper[4695]: I1126 13:55:17.936688 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:18 crc kubenswrapper[4695]: I1126 13:55:18.408985 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5r7c"] Nov 26 13:55:19 crc kubenswrapper[4695]: I1126 13:55:19.021754 4695 generic.go:334] "Generic (PLEG): container finished" podID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerID="5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd" exitCode=0 Nov 26 13:55:19 crc kubenswrapper[4695]: I1126 13:55:19.022094 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r7c" event={"ID":"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9","Type":"ContainerDied","Data":"5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd"} Nov 26 13:55:19 crc kubenswrapper[4695]: I1126 13:55:19.022127 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r7c" event={"ID":"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9","Type":"ContainerStarted","Data":"d392df10ceaa0da7a2d90c8ec6449d3a9317288d8adc075e8a52737f488235c0"} Nov 26 13:55:20 crc kubenswrapper[4695]: I1126 13:55:20.032471 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r7c" 
event={"ID":"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9","Type":"ContainerStarted","Data":"6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e"} Nov 26 13:55:21 crc kubenswrapper[4695]: I1126 13:55:21.048740 4695 generic.go:334] "Generic (PLEG): container finished" podID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerID="6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e" exitCode=0 Nov 26 13:55:21 crc kubenswrapper[4695]: I1126 13:55:21.048887 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r7c" event={"ID":"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9","Type":"ContainerDied","Data":"6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e"} Nov 26 13:55:22 crc kubenswrapper[4695]: I1126 13:55:22.059639 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r7c" event={"ID":"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9","Type":"ContainerStarted","Data":"80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d"} Nov 26 13:55:22 crc kubenswrapper[4695]: I1126 13:55:22.080366 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5r7c" podStartSLOduration=2.415217075 podStartE2EDuration="5.080333695s" podCreationTimestamp="2025-11-26 13:55:17 +0000 UTC" firstStartedPulling="2025-11-26 13:55:19.023576458 +0000 UTC m=+1902.659401540" lastFinishedPulling="2025-11-26 13:55:21.688693068 +0000 UTC m=+1905.324518160" observedRunningTime="2025-11-26 13:55:22.078808218 +0000 UTC m=+1905.714633300" watchObservedRunningTime="2025-11-26 13:55:22.080333695 +0000 UTC m=+1905.716158777" Nov 26 13:55:27 crc kubenswrapper[4695]: I1126 13:55:27.937065 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:27 crc kubenswrapper[4695]: I1126 13:55:27.937664 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:27 crc kubenswrapper[4695]: I1126 13:55:27.982233 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:28 crc kubenswrapper[4695]: I1126 13:55:28.038604 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-zzx52"] Nov 26 13:55:28 crc kubenswrapper[4695]: I1126 13:55:28.046057 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-zzx52"] Nov 26 13:55:28 crc kubenswrapper[4695]: I1126 13:55:28.158574 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:28 crc kubenswrapper[4695]: I1126 13:55:28.213938 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5r7c"] Nov 26 13:55:29 crc kubenswrapper[4695]: I1126 13:55:29.172963 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6597360-8ab5-4bba-9137-fb4f57019c78" path="/var/lib/kubelet/pods/b6597360-8ab5-4bba-9137-fb4f57019c78/volumes" Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.126563 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z5r7c" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="registry-server" containerID="cri-o://80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d" gracePeriod=2 Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.590425 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.688773 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-utilities\") pod \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.688883 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkjdf\" (UniqueName: \"kubernetes.io/projected/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-kube-api-access-rkjdf\") pod \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.688914 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-catalog-content\") pod \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\" (UID: \"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9\") " Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.689594 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-utilities" (OuterVolumeSpecName: "utilities") pod "cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" (UID: "cc73a14c-28f8-4d5d-be86-9a93ff9c81d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.695852 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-kube-api-access-rkjdf" (OuterVolumeSpecName: "kube-api-access-rkjdf") pod "cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" (UID: "cc73a14c-28f8-4d5d-be86-9a93ff9c81d9"). InnerVolumeSpecName "kube-api-access-rkjdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.788929 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" (UID: "cc73a14c-28f8-4d5d-be86-9a93ff9c81d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.791098 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkjdf\" (UniqueName: \"kubernetes.io/projected/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-kube-api-access-rkjdf\") on node \"crc\" DevicePath \"\"" Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.791127 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:55:30 crc kubenswrapper[4695]: I1126 13:55:30.791137 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.138956 4695 generic.go:334] "Generic (PLEG): container finished" podID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerID="80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d" exitCode=0 Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.139006 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r7c" event={"ID":"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9","Type":"ContainerDied","Data":"80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d"} Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.139030 4695 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5r7c" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.139057 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5r7c" event={"ID":"cc73a14c-28f8-4d5d-be86-9a93ff9c81d9","Type":"ContainerDied","Data":"d392df10ceaa0da7a2d90c8ec6449d3a9317288d8adc075e8a52737f488235c0"} Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.139077 4695 scope.go:117] "RemoveContainer" containerID="80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.184222 4695 scope.go:117] "RemoveContainer" containerID="6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.193157 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z5r7c"] Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.213678 4695 scope.go:117] "RemoveContainer" containerID="5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.215281 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z5r7c"] Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.261964 4695 scope.go:117] "RemoveContainer" containerID="80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d" Nov 26 13:55:31 crc kubenswrapper[4695]: E1126 13:55:31.262751 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d\": container with ID starting with 80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d not found: ID does not exist" containerID="80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.262788 4695 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d"} err="failed to get container status \"80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d\": rpc error: code = NotFound desc = could not find container \"80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d\": container with ID starting with 80bb6dd78c707bf4e219dfd3ff6c991ec38552c783eed35451051123e3790f4d not found: ID does not exist" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.262816 4695 scope.go:117] "RemoveContainer" containerID="6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e" Nov 26 13:55:31 crc kubenswrapper[4695]: E1126 13:55:31.263501 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e\": container with ID starting with 6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e not found: ID does not exist" containerID="6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.263532 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e"} err="failed to get container status \"6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e\": rpc error: code = NotFound desc = could not find container \"6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e\": container with ID starting with 6f7c499d7fc9627730a66c802e9e9db8ca94c27647c97f6f5c95fc70da76233e not found: ID does not exist" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.263552 4695 scope.go:117] "RemoveContainer" containerID="5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd" Nov 26 13:55:31 crc kubenswrapper[4695]: E1126 
13:55:31.263916 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd\": container with ID starting with 5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd not found: ID does not exist" containerID="5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd" Nov 26 13:55:31 crc kubenswrapper[4695]: I1126 13:55:31.263969 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd"} err="failed to get container status \"5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd\": rpc error: code = NotFound desc = could not find container \"5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd\": container with ID starting with 5e8e0820e9f0ce9a4e10146ebb22492813c34829acfb6777d09c626044be7ccd not found: ID does not exist" Nov 26 13:55:33 crc kubenswrapper[4695]: I1126 13:55:33.173240 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" path="/var/lib/kubelet/pods/cc73a14c-28f8-4d5d-be86-9a93ff9c81d9/volumes" Nov 26 13:55:56 crc kubenswrapper[4695]: I1126 13:55:56.379630 4695 generic.go:334] "Generic (PLEG): container finished" podID="960d575b-5f75-45a2-8dbe-dd185d9dc0a0" containerID="fc6f202e79846b1c9bf4beac01540e132371c69595ed099beb27b7026f6d4963" exitCode=0 Nov 26 13:55:56 crc kubenswrapper[4695]: I1126 13:55:56.379746 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" event={"ID":"960d575b-5f75-45a2-8dbe-dd185d9dc0a0","Type":"ContainerDied","Data":"fc6f202e79846b1c9bf4beac01540e132371c69595ed099beb27b7026f6d4963"} Nov 26 13:55:57 crc kubenswrapper[4695]: I1126 13:55:57.797884 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:55:57 crc kubenswrapper[4695]: I1126 13:55:57.932011 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpwph\" (UniqueName: \"kubernetes.io/projected/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-kube-api-access-cpwph\") pod \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " Nov 26 13:55:57 crc kubenswrapper[4695]: I1126 13:55:57.932269 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-ssh-key\") pod \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " Nov 26 13:55:57 crc kubenswrapper[4695]: I1126 13:55:57.932471 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-inventory\") pod \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\" (UID: \"960d575b-5f75-45a2-8dbe-dd185d9dc0a0\") " Nov 26 13:55:57 crc kubenswrapper[4695]: I1126 13:55:57.937425 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-kube-api-access-cpwph" (OuterVolumeSpecName: "kube-api-access-cpwph") pod "960d575b-5f75-45a2-8dbe-dd185d9dc0a0" (UID: "960d575b-5f75-45a2-8dbe-dd185d9dc0a0"). InnerVolumeSpecName "kube-api-access-cpwph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:55:57 crc kubenswrapper[4695]: I1126 13:55:57.958001 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-inventory" (OuterVolumeSpecName: "inventory") pod "960d575b-5f75-45a2-8dbe-dd185d9dc0a0" (UID: "960d575b-5f75-45a2-8dbe-dd185d9dc0a0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:55:57 crc kubenswrapper[4695]: I1126 13:55:57.958388 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "960d575b-5f75-45a2-8dbe-dd185d9dc0a0" (UID: "960d575b-5f75-45a2-8dbe-dd185d9dc0a0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.034110 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpwph\" (UniqueName: \"kubernetes.io/projected/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-kube-api-access-cpwph\") on node \"crc\" DevicePath \"\"" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.034153 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.034163 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/960d575b-5f75-45a2-8dbe-dd185d9dc0a0-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.401063 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" event={"ID":"960d575b-5f75-45a2-8dbe-dd185d9dc0a0","Type":"ContainerDied","Data":"86d3022905c99e1a2cb03cd9c66ebb07bf930a892b37b4beda17b6960f6e1afd"} Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.401426 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86d3022905c99e1a2cb03cd9c66ebb07bf930a892b37b4beda17b6960f6e1afd" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.401110 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.484896 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll"] Nov 26 13:55:58 crc kubenswrapper[4695]: E1126 13:55:58.485445 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="extract-content" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.485466 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="extract-content" Nov 26 13:55:58 crc kubenswrapper[4695]: E1126 13:55:58.485486 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="extract-utilities" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.485494 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="extract-utilities" Nov 26 13:55:58 crc kubenswrapper[4695]: E1126 13:55:58.485514 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960d575b-5f75-45a2-8dbe-dd185d9dc0a0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.485524 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="960d575b-5f75-45a2-8dbe-dd185d9dc0a0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 26 13:55:58 crc kubenswrapper[4695]: E1126 13:55:58.485556 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="registry-server" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.485564 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="registry-server" Nov 26 13:55:58 crc 
kubenswrapper[4695]: I1126 13:55:58.485784 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="960d575b-5f75-45a2-8dbe-dd185d9dc0a0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.485816 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc73a14c-28f8-4d5d-be86-9a93ff9c81d9" containerName="registry-server" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.486538 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.488459 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.489695 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.489978 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.490110 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.494761 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll"] Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.644895 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.645073 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.645123 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4ml\" (UniqueName: \"kubernetes.io/projected/354489f4-e2ae-4a52-8708-5c495c729662-kube-api-access-8m4ml\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.747099 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.747192 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.747247 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8m4ml\" (UniqueName: \"kubernetes.io/projected/354489f4-e2ae-4a52-8708-5c495c729662-kube-api-access-8m4ml\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.751221 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.751291 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.767828 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4ml\" (UniqueName: \"kubernetes.io/projected/354489f4-e2ae-4a52-8708-5c495c729662-kube-api-access-8m4ml\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jbnll\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:58 crc kubenswrapper[4695]: I1126 13:55:58.804417 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:55:59 crc kubenswrapper[4695]: I1126 13:55:59.281881 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll"] Nov 26 13:55:59 crc kubenswrapper[4695]: I1126 13:55:59.413398 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" event={"ID":"354489f4-e2ae-4a52-8708-5c495c729662","Type":"ContainerStarted","Data":"d24e7794f5774d947bba7ebd6fb9ab1d44e0214c76f1041ff3406232623046b0"} Nov 26 13:56:00 crc kubenswrapper[4695]: I1126 13:56:00.425522 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" event={"ID":"354489f4-e2ae-4a52-8708-5c495c729662","Type":"ContainerStarted","Data":"7aa43f35998ff4194a6a3b8cd02a1c27f3d5e7e26750ec3ae7e56eed0c90207a"} Nov 26 13:56:00 crc kubenswrapper[4695]: I1126 13:56:00.458072 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" podStartSLOduration=1.92048828 podStartE2EDuration="2.458050575s" podCreationTimestamp="2025-11-26 13:55:58 +0000 UTC" firstStartedPulling="2025-11-26 13:55:59.291853786 +0000 UTC m=+1942.927678878" lastFinishedPulling="2025-11-26 13:55:59.829416091 +0000 UTC m=+1943.465241173" observedRunningTime="2025-11-26 13:56:00.450477917 +0000 UTC m=+1944.086302999" watchObservedRunningTime="2025-11-26 13:56:00.458050575 +0000 UTC m=+1944.093875657" Nov 26 13:56:05 crc kubenswrapper[4695]: I1126 13:56:05.468306 4695 generic.go:334] "Generic (PLEG): container finished" podID="354489f4-e2ae-4a52-8708-5c495c729662" containerID="7aa43f35998ff4194a6a3b8cd02a1c27f3d5e7e26750ec3ae7e56eed0c90207a" exitCode=0 Nov 26 13:56:05 crc kubenswrapper[4695]: I1126 13:56:05.468419 4695 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" event={"ID":"354489f4-e2ae-4a52-8708-5c495c729662","Type":"ContainerDied","Data":"7aa43f35998ff4194a6a3b8cd02a1c27f3d5e7e26750ec3ae7e56eed0c90207a"} Nov 26 13:56:06 crc kubenswrapper[4695]: I1126 13:56:06.413338 4695 scope.go:117] "RemoveContainer" containerID="73bcfc8a9ba792371d225d90aaab498137bf2fa8860559fdb11e2e86166be2ab" Nov 26 13:56:06 crc kubenswrapper[4695]: I1126 13:56:06.465560 4695 scope.go:117] "RemoveContainer" containerID="662f56bbccfddd0f417d9f1c5903b27aee718f2a3f903fe268393d3edee6598e" Nov 26 13:56:06 crc kubenswrapper[4695]: I1126 13:56:06.532554 4695 scope.go:117] "RemoveContainer" containerID="d7964da8f13d02f415806519a7fa08d01150a9fc7b8a57d489bc022ef55b8fb6" Nov 26 13:56:06 crc kubenswrapper[4695]: I1126 13:56:06.573063 4695 scope.go:117] "RemoveContainer" containerID="1e5cb882573728f88557512418f95b54b847dccc4aa2f5b2ac0d528ee616bf5c" Nov 26 13:56:06 crc kubenswrapper[4695]: I1126 13:56:06.884999 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.014204 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8m4ml\" (UniqueName: \"kubernetes.io/projected/354489f4-e2ae-4a52-8708-5c495c729662-kube-api-access-8m4ml\") pod \"354489f4-e2ae-4a52-8708-5c495c729662\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.014429 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-inventory\") pod \"354489f4-e2ae-4a52-8708-5c495c729662\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.014564 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-ssh-key\") pod \"354489f4-e2ae-4a52-8708-5c495c729662\" (UID: \"354489f4-e2ae-4a52-8708-5c495c729662\") " Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.023089 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/354489f4-e2ae-4a52-8708-5c495c729662-kube-api-access-8m4ml" (OuterVolumeSpecName: "kube-api-access-8m4ml") pod "354489f4-e2ae-4a52-8708-5c495c729662" (UID: "354489f4-e2ae-4a52-8708-5c495c729662"). InnerVolumeSpecName "kube-api-access-8m4ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.047917 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "354489f4-e2ae-4a52-8708-5c495c729662" (UID: "354489f4-e2ae-4a52-8708-5c495c729662"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.047957 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-inventory" (OuterVolumeSpecName: "inventory") pod "354489f4-e2ae-4a52-8708-5c495c729662" (UID: "354489f4-e2ae-4a52-8708-5c495c729662"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.117442 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8m4ml\" (UniqueName: \"kubernetes.io/projected/354489f4-e2ae-4a52-8708-5c495c729662-kube-api-access-8m4ml\") on node \"crc\" DevicePath \"\"" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.117966 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.117985 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/354489f4-e2ae-4a52-8708-5c495c729662-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.492791 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" event={"ID":"354489f4-e2ae-4a52-8708-5c495c729662","Type":"ContainerDied","Data":"d24e7794f5774d947bba7ebd6fb9ab1d44e0214c76f1041ff3406232623046b0"} Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.492833 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24e7794f5774d947bba7ebd6fb9ab1d44e0214c76f1041ff3406232623046b0" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.493068 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jbnll" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.566821 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l"] Nov 26 13:56:07 crc kubenswrapper[4695]: E1126 13:56:07.567530 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354489f4-e2ae-4a52-8708-5c495c729662" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.567548 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="354489f4-e2ae-4a52-8708-5c495c729662" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.567786 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="354489f4-e2ae-4a52-8708-5c495c729662" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.568702 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.571164 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.571562 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.571861 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.582870 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l"] Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.587573 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.729713 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.729779 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.730757 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt9xl\" (UniqueName: \"kubernetes.io/projected/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-kube-api-access-zt9xl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.833260 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.833495 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zt9xl\" (UniqueName: \"kubernetes.io/projected/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-kube-api-access-zt9xl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.833534 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.842601 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: 
\"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.847431 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.851574 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zt9xl\" (UniqueName: \"kubernetes.io/projected/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-kube-api-access-zt9xl\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-ljn7l\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:07 crc kubenswrapper[4695]: I1126 13:56:07.892323 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:08 crc kubenswrapper[4695]: I1126 13:56:08.480999 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l"] Nov 26 13:56:08 crc kubenswrapper[4695]: I1126 13:56:08.504987 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" event={"ID":"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5","Type":"ContainerStarted","Data":"2521db775d7ac621864de17ed51d40dde109d4ed8fd11f6ad075212f54a8dc46"} Nov 26 13:56:09 crc kubenswrapper[4695]: I1126 13:56:09.536187 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" event={"ID":"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5","Type":"ContainerStarted","Data":"9a7f5dcf505f6f086518db0ad2664a0f12680bd60df0f3feec771bb898c41e43"} Nov 26 13:56:09 crc kubenswrapper[4695]: I1126 13:56:09.552605 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" podStartSLOduration=2.127309827 podStartE2EDuration="2.552588617s" podCreationTimestamp="2025-11-26 13:56:07 +0000 UTC" firstStartedPulling="2025-11-26 13:56:08.486185762 +0000 UTC m=+1952.122010844" lastFinishedPulling="2025-11-26 13:56:08.911464512 +0000 UTC m=+1952.547289634" observedRunningTime="2025-11-26 13:56:09.552429602 +0000 UTC m=+1953.188254684" watchObservedRunningTime="2025-11-26 13:56:09.552588617 +0000 UTC m=+1953.188413689" Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.054590 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-q4zxj"] Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.063271 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-1741-account-create-update-fx2z9"] Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 
13:56:17.071362 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-bkzwl"] Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.078535 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-bkzwl"] Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.087988 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-1741-account-create-update-fx2z9"] Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.096148 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-q4zxj"] Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.175241 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b003921-6331-4569-9771-82f7ccc92f84" path="/var/lib/kubelet/pods/9b003921-6331-4569-9771-82f7ccc92f84/volumes" Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.176070 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476" path="/var/lib/kubelet/pods/b7cfb3bd-9fbd-4cf3-a7d9-5dc717549476/volumes" Nov 26 13:56:17 crc kubenswrapper[4695]: I1126 13:56:17.176707 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb77d838-bf2d-4351-af21-b039e7fa1089" path="/var/lib/kubelet/pods/cb77d838-bf2d-4351-af21-b039e7fa1089/volumes" Nov 26 13:56:18 crc kubenswrapper[4695]: I1126 13:56:18.046151 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-tjhp9"] Nov 26 13:56:18 crc kubenswrapper[4695]: I1126 13:56:18.056870 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-1195-account-create-update-cft27"] Nov 26 13:56:18 crc kubenswrapper[4695]: I1126 13:56:18.068023 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-5034-account-create-update-pfg6h"] Nov 26 13:56:18 crc kubenswrapper[4695]: I1126 13:56:18.078252 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-db-create-tjhp9"] Nov 26 13:56:18 crc kubenswrapper[4695]: I1126 13:56:18.088504 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-1195-account-create-update-cft27"] Nov 26 13:56:18 crc kubenswrapper[4695]: I1126 13:56:18.098023 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-5034-account-create-update-pfg6h"] Nov 26 13:56:19 crc kubenswrapper[4695]: I1126 13:56:19.176157 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9db49627-d2bf-49a8-889c-fa076d76db84" path="/var/lib/kubelet/pods/9db49627-d2bf-49a8-889c-fa076d76db84/volumes" Nov 26 13:56:19 crc kubenswrapper[4695]: I1126 13:56:19.177646 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56" path="/var/lib/kubelet/pods/9e687a5f-d4b0-4d38-8bae-4f4bc46a2d56/volumes" Nov 26 13:56:19 crc kubenswrapper[4695]: I1126 13:56:19.178318 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fa32a3-4bbb-432d-a986-920762192742" path="/var/lib/kubelet/pods/f4fa32a3-4bbb-432d-a986-920762192742/volumes" Nov 26 13:56:46 crc kubenswrapper[4695]: I1126 13:56:46.916597 4695 generic.go:334] "Generic (PLEG): container finished" podID="87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5" containerID="9a7f5dcf505f6f086518db0ad2664a0f12680bd60df0f3feec771bb898c41e43" exitCode=0 Nov 26 13:56:46 crc kubenswrapper[4695]: I1126 13:56:46.916694 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" event={"ID":"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5","Type":"ContainerDied","Data":"9a7f5dcf505f6f086518db0ad2664a0f12680bd60df0f3feec771bb898c41e43"} Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.056521 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7vjhl"] Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.072562 4695 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7vjhl"] Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.380858 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.546952 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-inventory\") pod \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.547174 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zt9xl\" (UniqueName: \"kubernetes.io/projected/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-kube-api-access-zt9xl\") pod \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.547201 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-ssh-key\") pod \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\" (UID: \"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5\") " Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.554105 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-kube-api-access-zt9xl" (OuterVolumeSpecName: "kube-api-access-zt9xl") pod "87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5" (UID: "87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5"). InnerVolumeSpecName "kube-api-access-zt9xl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.578768 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5" (UID: "87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.580253 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-inventory" (OuterVolumeSpecName: "inventory") pod "87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5" (UID: "87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.649779 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.649847 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zt9xl\" (UniqueName: \"kubernetes.io/projected/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-kube-api-access-zt9xl\") on node \"crc\" DevicePath \"\"" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.649862 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.937441 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" 
event={"ID":"87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5","Type":"ContainerDied","Data":"2521db775d7ac621864de17ed51d40dde109d4ed8fd11f6ad075212f54a8dc46"} Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.937499 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2521db775d7ac621864de17ed51d40dde109d4ed8fd11f6ad075212f54a8dc46" Nov 26 13:56:48 crc kubenswrapper[4695]: I1126 13:56:48.937594 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-ljn7l" Nov 26 13:56:49 crc kubenswrapper[4695]: E1126 13:56:49.010170 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87321f9e_8ed5_40a0_bf49_f5e8c63ba2e5.slice\": RecentStats: unable to find data in memory cache]" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.041045 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc"] Nov 26 13:56:49 crc kubenswrapper[4695]: E1126 13:56:49.042373 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.042404 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.042647 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.043718 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.050281 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.050535 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.050665 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.050814 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.052299 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc"] Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.061159 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7f8k\" (UniqueName: \"kubernetes.io/projected/168bfff7-248e-4717-beac-8f7986a5d31e-kube-api-access-p7f8k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.061232 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.061265 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.163668 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7f8k\" (UniqueName: \"kubernetes.io/projected/168bfff7-248e-4717-beac-8f7986a5d31e-kube-api-access-p7f8k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.163847 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.164064 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.173304 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: 
\"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.173709 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.176074 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="056cfe2f-f427-4ab5-b43d-eac297c8cbd8" path="/var/lib/kubelet/pods/056cfe2f-f427-4ab5-b43d-eac297c8cbd8/volumes" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.198076 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7f8k\" (UniqueName: \"kubernetes.io/projected/168bfff7-248e-4717-beac-8f7986a5d31e-kube-api-access-p7f8k\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.369455 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.933044 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc"] Nov 26 13:56:49 crc kubenswrapper[4695]: I1126 13:56:49.947286 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" event={"ID":"168bfff7-248e-4717-beac-8f7986a5d31e","Type":"ContainerStarted","Data":"e9ebaf18c7296e167834dffc0a5b4bc5a408b4bef9cda4bc4d2a27df165d5806"} Nov 26 13:56:50 crc kubenswrapper[4695]: I1126 13:56:50.959201 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" event={"ID":"168bfff7-248e-4717-beac-8f7986a5d31e","Type":"ContainerStarted","Data":"f9d795cfbf6fd0a657ab4d83351151c0a68cbcb0eb16249ffcdf157afa43d588"} Nov 26 13:56:50 crc kubenswrapper[4695]: I1126 13:56:50.979219 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" podStartSLOduration=1.220778153 podStartE2EDuration="1.979197336s" podCreationTimestamp="2025-11-26 13:56:49 +0000 UTC" firstStartedPulling="2025-11-26 13:56:49.930264933 +0000 UTC m=+1993.566090015" lastFinishedPulling="2025-11-26 13:56:50.688684116 +0000 UTC m=+1994.324509198" observedRunningTime="2025-11-26 13:56:50.971768375 +0000 UTC m=+1994.607593457" watchObservedRunningTime="2025-11-26 13:56:50.979197336 +0000 UTC m=+1994.615022418" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.848545 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hpbwt"] Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.851537 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.871428 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpbwt"] Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.890275 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qd5l\" (UniqueName: \"kubernetes.io/projected/05b599a6-e2d7-4785-a198-20429bef96ca-kube-api-access-8qd5l\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.890518 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-catalog-content\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.890560 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-utilities\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.991815 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qd5l\" (UniqueName: \"kubernetes.io/projected/05b599a6-e2d7-4785-a198-20429bef96ca-kube-api-access-8qd5l\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.991915 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-catalog-content\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.991938 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-utilities\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.992405 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-utilities\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:54 crc kubenswrapper[4695]: I1126 13:56:54.992480 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-catalog-content\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:55 crc kubenswrapper[4695]: I1126 13:56:55.022779 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qd5l\" (UniqueName: \"kubernetes.io/projected/05b599a6-e2d7-4785-a198-20429bef96ca-kube-api-access-8qd5l\") pod \"redhat-marketplace-hpbwt\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:55 crc kubenswrapper[4695]: I1126 13:56:55.181927 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:56:55 crc kubenswrapper[4695]: I1126 13:56:55.673119 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpbwt"] Nov 26 13:56:56 crc kubenswrapper[4695]: I1126 13:56:56.007720 4695 generic.go:334] "Generic (PLEG): container finished" podID="05b599a6-e2d7-4785-a198-20429bef96ca" containerID="41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34" exitCode=0 Nov 26 13:56:56 crc kubenswrapper[4695]: I1126 13:56:56.007797 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpbwt" event={"ID":"05b599a6-e2d7-4785-a198-20429bef96ca","Type":"ContainerDied","Data":"41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34"} Nov 26 13:56:56 crc kubenswrapper[4695]: I1126 13:56:56.007861 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpbwt" event={"ID":"05b599a6-e2d7-4785-a198-20429bef96ca","Type":"ContainerStarted","Data":"ace43d109078a4d642aa6a229dfdc7f675cbf6e4df3f29ac8753e8701b0b93e4"} Nov 26 13:56:58 crc kubenswrapper[4695]: I1126 13:56:58.029659 4695 generic.go:334] "Generic (PLEG): container finished" podID="05b599a6-e2d7-4785-a198-20429bef96ca" containerID="1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa" exitCode=0 Nov 26 13:56:58 crc kubenswrapper[4695]: I1126 13:56:58.029746 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpbwt" event={"ID":"05b599a6-e2d7-4785-a198-20429bef96ca","Type":"ContainerDied","Data":"1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa"} Nov 26 13:56:59 crc kubenswrapper[4695]: I1126 13:56:59.043785 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpbwt" 
event={"ID":"05b599a6-e2d7-4785-a198-20429bef96ca","Type":"ContainerStarted","Data":"9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07"} Nov 26 13:56:59 crc kubenswrapper[4695]: I1126 13:56:59.067657 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hpbwt" podStartSLOduration=2.48885621 podStartE2EDuration="5.067636282s" podCreationTimestamp="2025-11-26 13:56:54 +0000 UTC" firstStartedPulling="2025-11-26 13:56:56.009842153 +0000 UTC m=+1999.645667235" lastFinishedPulling="2025-11-26 13:56:58.588622215 +0000 UTC m=+2002.224447307" observedRunningTime="2025-11-26 13:56:59.06144482 +0000 UTC m=+2002.697269902" watchObservedRunningTime="2025-11-26 13:56:59.067636282 +0000 UTC m=+2002.703461364" Nov 26 13:57:05 crc kubenswrapper[4695]: I1126 13:57:05.183204 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:57:05 crc kubenswrapper[4695]: I1126 13:57:05.183875 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:57:05 crc kubenswrapper[4695]: I1126 13:57:05.225021 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:57:06 crc kubenswrapper[4695]: I1126 13:57:06.141651 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:57:06 crc kubenswrapper[4695]: I1126 13:57:06.186366 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpbwt"] Nov 26 13:57:06 crc kubenswrapper[4695]: I1126 13:57:06.778196 4695 scope.go:117] "RemoveContainer" containerID="0a529c087cc2accf72e8a5578df62155bd33c7d9f5a1ea00adb4d7a480b5904a" Nov 26 13:57:06 crc kubenswrapper[4695]: I1126 13:57:06.801712 4695 scope.go:117] "RemoveContainer" 
containerID="6b2185c381c33a81e989ace298ff8b1f51f062a5e9e9f0ffee659f9ea071b8eb" Nov 26 13:57:06 crc kubenswrapper[4695]: I1126 13:57:06.846704 4695 scope.go:117] "RemoveContainer" containerID="c9926f27a71f1f404324a49a20230d8fa967b4dff86b1773449f12e230a469bc" Nov 26 13:57:06 crc kubenswrapper[4695]: I1126 13:57:06.908425 4695 scope.go:117] "RemoveContainer" containerID="ac6b7ea9683b3fe84dcc23a585e6fe76e55b4f41b72c66db6c6a8a75c4b0fff0" Nov 26 13:57:06 crc kubenswrapper[4695]: I1126 13:57:06.957728 4695 scope.go:117] "RemoveContainer" containerID="c51cbe4d35500cc0e89eff5a8b423328b4429064abf8b76f1d29b2c54500d29f" Nov 26 13:57:07 crc kubenswrapper[4695]: I1126 13:57:07.010750 4695 scope.go:117] "RemoveContainer" containerID="e7118dbebb9d508125bb3d9e5c851b89e9fb9c3c8a32846337bad8ca0fb3eb2c" Nov 26 13:57:07 crc kubenswrapper[4695]: I1126 13:57:07.070564 4695 scope.go:117] "RemoveContainer" containerID="0e97052315eb2017ae3bff9436b94da22b00134eab97e4b2d88dca6c12e4adde" Nov 26 13:57:08 crc kubenswrapper[4695]: I1126 13:57:08.129290 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hpbwt" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" containerName="registry-server" containerID="cri-o://9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07" gracePeriod=2 Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.057633 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.139526 4695 generic.go:334] "Generic (PLEG): container finished" podID="05b599a6-e2d7-4785-a198-20429bef96ca" containerID="9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07" exitCode=0 Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.139591 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hpbwt" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.139574 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpbwt" event={"ID":"05b599a6-e2d7-4785-a198-20429bef96ca","Type":"ContainerDied","Data":"9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07"} Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.139720 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hpbwt" event={"ID":"05b599a6-e2d7-4785-a198-20429bef96ca","Type":"ContainerDied","Data":"ace43d109078a4d642aa6a229dfdc7f675cbf6e4df3f29ac8753e8701b0b93e4"} Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.139746 4695 scope.go:117] "RemoveContainer" containerID="9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.152120 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-catalog-content\") pod \"05b599a6-e2d7-4785-a198-20429bef96ca\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.152217 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qd5l\" (UniqueName: \"kubernetes.io/projected/05b599a6-e2d7-4785-a198-20429bef96ca-kube-api-access-8qd5l\") pod \"05b599a6-e2d7-4785-a198-20429bef96ca\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.152532 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-utilities\") pod \"05b599a6-e2d7-4785-a198-20429bef96ca\" (UID: \"05b599a6-e2d7-4785-a198-20429bef96ca\") " Nov 26 13:57:09 crc 
kubenswrapper[4695]: I1126 13:57:09.154066 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-utilities" (OuterVolumeSpecName: "utilities") pod "05b599a6-e2d7-4785-a198-20429bef96ca" (UID: "05b599a6-e2d7-4785-a198-20429bef96ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.162665 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b599a6-e2d7-4785-a198-20429bef96ca-kube-api-access-8qd5l" (OuterVolumeSpecName: "kube-api-access-8qd5l") pod "05b599a6-e2d7-4785-a198-20429bef96ca" (UID: "05b599a6-e2d7-4785-a198-20429bef96ca"). InnerVolumeSpecName "kube-api-access-8qd5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.167249 4695 scope.go:117] "RemoveContainer" containerID="1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.171617 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05b599a6-e2d7-4785-a198-20429bef96ca" (UID: "05b599a6-e2d7-4785-a198-20429bef96ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.214924 4695 scope.go:117] "RemoveContainer" containerID="41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.254917 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.255209 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qd5l\" (UniqueName: \"kubernetes.io/projected/05b599a6-e2d7-4785-a198-20429bef96ca-kube-api-access-8qd5l\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.255221 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b599a6-e2d7-4785-a198-20429bef96ca-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.261128 4695 scope.go:117] "RemoveContainer" containerID="9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07" Nov 26 13:57:09 crc kubenswrapper[4695]: E1126 13:57:09.261766 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07\": container with ID starting with 9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07 not found: ID does not exist" containerID="9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.261810 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07"} err="failed to get container status 
\"9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07\": rpc error: code = NotFound desc = could not find container \"9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07\": container with ID starting with 9860b82dd7627196363812118d9e9a48b7019e5c3fffa05f4b04aa59cdcb5f07 not found: ID does not exist" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.261841 4695 scope.go:117] "RemoveContainer" containerID="1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa" Nov 26 13:57:09 crc kubenswrapper[4695]: E1126 13:57:09.262147 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa\": container with ID starting with 1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa not found: ID does not exist" containerID="1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.262174 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa"} err="failed to get container status \"1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa\": rpc error: code = NotFound desc = could not find container \"1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa\": container with ID starting with 1d7ab26c593a51e175907b527e8ad56a872e5b00201540934412fb39f24c9daa not found: ID does not exist" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.262192 4695 scope.go:117] "RemoveContainer" containerID="41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34" Nov 26 13:57:09 crc kubenswrapper[4695]: E1126 13:57:09.263479 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34\": container with ID starting with 41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34 not found: ID does not exist" containerID="41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.263557 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34"} err="failed to get container status \"41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34\": rpc error: code = NotFound desc = could not find container \"41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34\": container with ID starting with 41d4dd2bf88f9c4d55fe862082a66ad6c43d4b5e85b06dc643c46c43d11c1e34 not found: ID does not exist" Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.462686 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpbwt"] Nov 26 13:57:09 crc kubenswrapper[4695]: I1126 13:57:09.471851 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hpbwt"] Nov 26 13:57:10 crc kubenswrapper[4695]: I1126 13:57:10.038890 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-fd5l2"] Nov 26 13:57:10 crc kubenswrapper[4695]: I1126 13:57:10.052111 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-fd5l2"] Nov 26 13:57:11 crc kubenswrapper[4695]: I1126 13:57:11.173817 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" path="/var/lib/kubelet/pods/05b599a6-e2d7-4785-a198-20429bef96ca/volumes" Nov 26 13:57:11 crc kubenswrapper[4695]: I1126 13:57:11.174858 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="738dcd55-7a7f-4281-a6c2-a49de39a161e" 
path="/var/lib/kubelet/pods/738dcd55-7a7f-4281-a6c2-a49de39a161e/volumes" Nov 26 13:57:12 crc kubenswrapper[4695]: I1126 13:57:12.046098 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wqfd2"] Nov 26 13:57:12 crc kubenswrapper[4695]: I1126 13:57:12.054483 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-wqfd2"] Nov 26 13:57:13 crc kubenswrapper[4695]: I1126 13:57:13.175615 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f29eebab-4cee-46a2-980a-49feebce3198" path="/var/lib/kubelet/pods/f29eebab-4cee-46a2-980a-49feebce3198/volumes" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.480551 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n4gcs"] Nov 26 13:57:15 crc kubenswrapper[4695]: E1126 13:57:15.480940 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" containerName="extract-utilities" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.480952 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" containerName="extract-utilities" Nov 26 13:57:15 crc kubenswrapper[4695]: E1126 13:57:15.480971 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" containerName="registry-server" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.480977 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" containerName="registry-server" Nov 26 13:57:15 crc kubenswrapper[4695]: E1126 13:57:15.480994 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" containerName="extract-content" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.481000 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" 
containerName="extract-content" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.481184 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b599a6-e2d7-4785-a198-20429bef96ca" containerName="registry-server" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.482802 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.496013 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4gcs"] Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.584098 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-catalog-content\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.584490 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-utilities\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.584604 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb6d9\" (UniqueName: \"kubernetes.io/projected/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-kube-api-access-rb6d9\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.686029 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-utilities\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.686105 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb6d9\" (UniqueName: \"kubernetes.io/projected/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-kube-api-access-rb6d9\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.686172 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-catalog-content\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.686637 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-utilities\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.686692 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-catalog-content\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.709195 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb6d9\" (UniqueName: 
\"kubernetes.io/projected/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-kube-api-access-rb6d9\") pod \"community-operators-n4gcs\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:15 crc kubenswrapper[4695]: I1126 13:57:15.807276 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:16 crc kubenswrapper[4695]: I1126 13:57:16.281861 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n4gcs"] Nov 26 13:57:17 crc kubenswrapper[4695]: I1126 13:57:17.221904 4695 generic.go:334] "Generic (PLEG): container finished" podID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerID="43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87" exitCode=0 Nov 26 13:57:17 crc kubenswrapper[4695]: I1126 13:57:17.222079 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4gcs" event={"ID":"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e","Type":"ContainerDied","Data":"43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87"} Nov 26 13:57:17 crc kubenswrapper[4695]: I1126 13:57:17.223630 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4gcs" event={"ID":"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e","Type":"ContainerStarted","Data":"babd4e9955f14cd422d7f763fa6e3eb70a8fa6c53011439abf2f6f1faa09691b"} Nov 26 13:57:19 crc kubenswrapper[4695]: I1126 13:57:19.240174 4695 generic.go:334] "Generic (PLEG): container finished" podID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerID="8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87" exitCode=0 Nov 26 13:57:19 crc kubenswrapper[4695]: I1126 13:57:19.240217 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4gcs" 
event={"ID":"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e","Type":"ContainerDied","Data":"8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87"} Nov 26 13:57:20 crc kubenswrapper[4695]: I1126 13:57:20.251279 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4gcs" event={"ID":"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e","Type":"ContainerStarted","Data":"80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7"} Nov 26 13:57:20 crc kubenswrapper[4695]: I1126 13:57:20.279379 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n4gcs" podStartSLOduration=2.597635537 podStartE2EDuration="5.27933853s" podCreationTimestamp="2025-11-26 13:57:15 +0000 UTC" firstStartedPulling="2025-11-26 13:57:17.223295783 +0000 UTC m=+2020.859120875" lastFinishedPulling="2025-11-26 13:57:19.904998796 +0000 UTC m=+2023.540823868" observedRunningTime="2025-11-26 13:57:20.273582759 +0000 UTC m=+2023.909407841" watchObservedRunningTime="2025-11-26 13:57:20.27933853 +0000 UTC m=+2023.915163622" Nov 26 13:57:25 crc kubenswrapper[4695]: I1126 13:57:25.809022 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:25 crc kubenswrapper[4695]: I1126 13:57:25.809828 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:25 crc kubenswrapper[4695]: I1126 13:57:25.859899 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:26 crc kubenswrapper[4695]: I1126 13:57:26.348679 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:26 crc kubenswrapper[4695]: I1126 13:57:26.395243 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-n4gcs"] Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.332743 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n4gcs" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="registry-server" containerID="cri-o://80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7" gracePeriod=2 Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.764874 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.865549 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-catalog-content\") pod \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.865630 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb6d9\" (UniqueName: \"kubernetes.io/projected/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-kube-api-access-rb6d9\") pod \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.865682 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-utilities\") pod \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\" (UID: \"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e\") " Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.866716 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-utilities" (OuterVolumeSpecName: "utilities") pod "f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" (UID: 
"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.870790 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-kube-api-access-rb6d9" (OuterVolumeSpecName: "kube-api-access-rb6d9") pod "f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" (UID: "f3092a1a-3bfa-4df2-b10d-cc4e6b73389e"). InnerVolumeSpecName "kube-api-access-rb6d9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.924162 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" (UID: "f3092a1a-3bfa-4df2-b10d-cc4e6b73389e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.974327 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.974541 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb6d9\" (UniqueName: \"kubernetes.io/projected/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-kube-api-access-rb6d9\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:28 crc kubenswrapper[4695]: I1126 13:57:28.974568 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.343773 4695 generic.go:334] "Generic (PLEG): container finished" 
podID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerID="80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7" exitCode=0 Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.343816 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4gcs" event={"ID":"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e","Type":"ContainerDied","Data":"80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7"} Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.343851 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n4gcs" event={"ID":"f3092a1a-3bfa-4df2-b10d-cc4e6b73389e","Type":"ContainerDied","Data":"babd4e9955f14cd422d7f763fa6e3eb70a8fa6c53011439abf2f6f1faa09691b"} Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.343849 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n4gcs" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.343867 4695 scope.go:117] "RemoveContainer" containerID="80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.366312 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n4gcs"] Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.375432 4695 scope.go:117] "RemoveContainer" containerID="8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.381503 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n4gcs"] Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.407159 4695 scope.go:117] "RemoveContainer" containerID="43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.443804 4695 scope.go:117] "RemoveContainer" 
containerID="80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7" Nov 26 13:57:29 crc kubenswrapper[4695]: E1126 13:57:29.444429 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7\": container with ID starting with 80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7 not found: ID does not exist" containerID="80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.444469 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7"} err="failed to get container status \"80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7\": rpc error: code = NotFound desc = could not find container \"80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7\": container with ID starting with 80aa26ff04d117e67e67ea0d5d2e8d652a36774ca855f0e6f0c22c062bab4ee7 not found: ID does not exist" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.444507 4695 scope.go:117] "RemoveContainer" containerID="8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87" Nov 26 13:57:29 crc kubenswrapper[4695]: E1126 13:57:29.444717 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87\": container with ID starting with 8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87 not found: ID does not exist" containerID="8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.444739 4695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87"} err="failed to get container status \"8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87\": rpc error: code = NotFound desc = could not find container \"8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87\": container with ID starting with 8de1c1329d51245cdf1fed2fe8726e0637477c0d1535954461b639bc62594a87 not found: ID does not exist" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.444768 4695 scope.go:117] "RemoveContainer" containerID="43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87" Nov 26 13:57:29 crc kubenswrapper[4695]: E1126 13:57:29.444955 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87\": container with ID starting with 43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87 not found: ID does not exist" containerID="43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87" Nov 26 13:57:29 crc kubenswrapper[4695]: I1126 13:57:29.444978 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87"} err="failed to get container status \"43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87\": rpc error: code = NotFound desc = could not find container \"43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87\": container with ID starting with 43e6de57352711403d502fdd2bf70e652ab605a26014c10e01c78ac175ff4e87 not found: ID does not exist" Nov 26 13:57:31 crc kubenswrapper[4695]: I1126 13:57:31.173330 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" path="/var/lib/kubelet/pods/f3092a1a-3bfa-4df2-b10d-cc4e6b73389e/volumes" Nov 26 13:57:36 crc kubenswrapper[4695]: I1126 
13:57:36.396848 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:57:36 crc kubenswrapper[4695]: I1126 13:57:36.397590 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:57:39 crc kubenswrapper[4695]: I1126 13:57:39.460719 4695 generic.go:334] "Generic (PLEG): container finished" podID="168bfff7-248e-4717-beac-8f7986a5d31e" containerID="f9d795cfbf6fd0a657ab4d83351151c0a68cbcb0eb16249ffcdf157afa43d588" exitCode=0 Nov 26 13:57:39 crc kubenswrapper[4695]: I1126 13:57:39.460836 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" event={"ID":"168bfff7-248e-4717-beac-8f7986a5d31e","Type":"ContainerDied","Data":"f9d795cfbf6fd0a657ab4d83351151c0a68cbcb0eb16249ffcdf157afa43d588"} Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.864155 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.892302 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-inventory\") pod \"168bfff7-248e-4717-beac-8f7986a5d31e\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.892417 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7f8k\" (UniqueName: \"kubernetes.io/projected/168bfff7-248e-4717-beac-8f7986a5d31e-kube-api-access-p7f8k\") pod \"168bfff7-248e-4717-beac-8f7986a5d31e\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.892751 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-ssh-key\") pod \"168bfff7-248e-4717-beac-8f7986a5d31e\" (UID: \"168bfff7-248e-4717-beac-8f7986a5d31e\") " Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.902722 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/168bfff7-248e-4717-beac-8f7986a5d31e-kube-api-access-p7f8k" (OuterVolumeSpecName: "kube-api-access-p7f8k") pod "168bfff7-248e-4717-beac-8f7986a5d31e" (UID: "168bfff7-248e-4717-beac-8f7986a5d31e"). InnerVolumeSpecName "kube-api-access-p7f8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.920968 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "168bfff7-248e-4717-beac-8f7986a5d31e" (UID: "168bfff7-248e-4717-beac-8f7986a5d31e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.923738 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-inventory" (OuterVolumeSpecName: "inventory") pod "168bfff7-248e-4717-beac-8f7986a5d31e" (UID: "168bfff7-248e-4717-beac-8f7986a5d31e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.996619 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.996654 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7f8k\" (UniqueName: \"kubernetes.io/projected/168bfff7-248e-4717-beac-8f7986a5d31e-kube-api-access-p7f8k\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:40 crc kubenswrapper[4695]: I1126 13:57:40.996665 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/168bfff7-248e-4717-beac-8f7986a5d31e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.478866 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.478793 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc" event={"ID":"168bfff7-248e-4717-beac-8f7986a5d31e","Type":"ContainerDied","Data":"e9ebaf18c7296e167834dffc0a5b4bc5a408b4bef9cda4bc4d2a27df165d5806"} Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.479192 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9ebaf18c7296e167834dffc0a5b4bc5a408b4bef9cda4bc4d2a27df165d5806" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.559625 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-76hkl"] Nov 26 13:57:41 crc kubenswrapper[4695]: E1126 13:57:41.560064 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="extract-utilities" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.560083 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="extract-utilities" Nov 26 13:57:41 crc kubenswrapper[4695]: E1126 13:57:41.560097 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="168bfff7-248e-4717-beac-8f7986a5d31e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.560104 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="168bfff7-248e-4717-beac-8f7986a5d31e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:57:41 crc kubenswrapper[4695]: E1126 13:57:41.560126 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="registry-server" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.560132 4695 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="registry-server" Nov 26 13:57:41 crc kubenswrapper[4695]: E1126 13:57:41.560152 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="extract-content" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.560158 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="extract-content" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.560327 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3092a1a-3bfa-4df2-b10d-cc4e6b73389e" containerName="registry-server" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.560364 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="168bfff7-248e-4717-beac-8f7986a5d31e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.560974 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.564535 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.565196 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.565905 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.569991 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-76hkl"] Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.571114 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.609185 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.609398 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.609500 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4pv7w\" (UniqueName: \"kubernetes.io/projected/08ef121f-97dc-4e9e-a466-438d25f2391e-kube-api-access-4pv7w\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.713690 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.713876 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.713919 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pv7w\" (UniqueName: \"kubernetes.io/projected/08ef121f-97dc-4e9e-a466-438d25f2391e-kube-api-access-4pv7w\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.720606 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.720799 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.733367 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pv7w\" (UniqueName: \"kubernetes.io/projected/08ef121f-97dc-4e9e-a466-438d25f2391e-kube-api-access-4pv7w\") pod \"ssh-known-hosts-edpm-deployment-76hkl\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:41 crc kubenswrapper[4695]: I1126 13:57:41.876734 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:42 crc kubenswrapper[4695]: I1126 13:57:42.537119 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-76hkl"] Nov 26 13:57:43 crc kubenswrapper[4695]: I1126 13:57:43.517005 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" event={"ID":"08ef121f-97dc-4e9e-a466-438d25f2391e","Type":"ContainerStarted","Data":"9cce992e753cac1d111ae20c4b9ec331ff83e1c3cac42fb65025012aabbad164"} Nov 26 13:57:43 crc kubenswrapper[4695]: I1126 13:57:43.517296 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" event={"ID":"08ef121f-97dc-4e9e-a466-438d25f2391e","Type":"ContainerStarted","Data":"251ac3d2dde11b18a8bfae32d2bd8d6db0640a47a298607051cb531148f1b58b"} Nov 26 13:57:43 crc kubenswrapper[4695]: I1126 13:57:43.534599 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" 
podStartSLOduration=1.875482706 podStartE2EDuration="2.534575594s" podCreationTimestamp="2025-11-26 13:57:41 +0000 UTC" firstStartedPulling="2025-11-26 13:57:42.5528965 +0000 UTC m=+2046.188721582" lastFinishedPulling="2025-11-26 13:57:43.211989388 +0000 UTC m=+2046.847814470" observedRunningTime="2025-11-26 13:57:43.529158573 +0000 UTC m=+2047.164983665" watchObservedRunningTime="2025-11-26 13:57:43.534575594 +0000 UTC m=+2047.170400686" Nov 26 13:57:50 crc kubenswrapper[4695]: I1126 13:57:50.578906 4695 generic.go:334] "Generic (PLEG): container finished" podID="08ef121f-97dc-4e9e-a466-438d25f2391e" containerID="9cce992e753cac1d111ae20c4b9ec331ff83e1c3cac42fb65025012aabbad164" exitCode=0 Nov 26 13:57:50 crc kubenswrapper[4695]: I1126 13:57:50.578985 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" event={"ID":"08ef121f-97dc-4e9e-a466-438d25f2391e","Type":"ContainerDied","Data":"9cce992e753cac1d111ae20c4b9ec331ff83e1c3cac42fb65025012aabbad164"} Nov 26 13:57:51 crc kubenswrapper[4695]: I1126 13:57:51.971373 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.116736 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-ssh-key-openstack-edpm-ipam\") pod \"08ef121f-97dc-4e9e-a466-438d25f2391e\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.117010 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pv7w\" (UniqueName: \"kubernetes.io/projected/08ef121f-97dc-4e9e-a466-438d25f2391e-kube-api-access-4pv7w\") pod \"08ef121f-97dc-4e9e-a466-438d25f2391e\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.117139 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-inventory-0\") pod \"08ef121f-97dc-4e9e-a466-438d25f2391e\" (UID: \"08ef121f-97dc-4e9e-a466-438d25f2391e\") " Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.133666 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ef121f-97dc-4e9e-a466-438d25f2391e-kube-api-access-4pv7w" (OuterVolumeSpecName: "kube-api-access-4pv7w") pod "08ef121f-97dc-4e9e-a466-438d25f2391e" (UID: "08ef121f-97dc-4e9e-a466-438d25f2391e"). InnerVolumeSpecName "kube-api-access-4pv7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.169582 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "08ef121f-97dc-4e9e-a466-438d25f2391e" (UID: "08ef121f-97dc-4e9e-a466-438d25f2391e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.190253 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "08ef121f-97dc-4e9e-a466-438d25f2391e" (UID: "08ef121f-97dc-4e9e-a466-438d25f2391e"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.219306 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pv7w\" (UniqueName: \"kubernetes.io/projected/08ef121f-97dc-4e9e-a466-438d25f2391e-kube-api-access-4pv7w\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.219337 4695 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.219369 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/08ef121f-97dc-4e9e-a466-438d25f2391e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.619807 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" 
event={"ID":"08ef121f-97dc-4e9e-a466-438d25f2391e","Type":"ContainerDied","Data":"251ac3d2dde11b18a8bfae32d2bd8d6db0640a47a298607051cb531148f1b58b"} Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.619857 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="251ac3d2dde11b18a8bfae32d2bd8d6db0640a47a298607051cb531148f1b58b" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.619880 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-76hkl" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.670770 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt"] Nov 26 13:57:52 crc kubenswrapper[4695]: E1126 13:57:52.671621 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ef121f-97dc-4e9e-a466-438d25f2391e" containerName="ssh-known-hosts-edpm-deployment" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.671646 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ef121f-97dc-4e9e-a466-438d25f2391e" containerName="ssh-known-hosts-edpm-deployment" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.671874 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ef121f-97dc-4e9e-a466-438d25f2391e" containerName="ssh-known-hosts-edpm-deployment" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.672743 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.676662 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.677281 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.678765 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt"] Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.679046 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.679717 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.833879 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.833956 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zkk\" (UniqueName: \"kubernetes.io/projected/d77b20d3-631b-481b-b480-226968d0b73c-kube-api-access-w4zkk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.834274 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.936398 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.936455 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4zkk\" (UniqueName: \"kubernetes.io/projected/d77b20d3-631b-481b-b480-226968d0b73c-kube-api-access-w4zkk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.936529 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.939779 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.939948 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:52 crc kubenswrapper[4695]: I1126 13:57:52.954341 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4zkk\" (UniqueName: \"kubernetes.io/projected/d77b20d3-631b-481b-b480-226968d0b73c-kube-api-access-w4zkk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-2twlt\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:53 crc kubenswrapper[4695]: I1126 13:57:53.035671 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:57:53 crc kubenswrapper[4695]: I1126 13:57:53.553767 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt"] Nov 26 13:57:53 crc kubenswrapper[4695]: I1126 13:57:53.628850 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" event={"ID":"d77b20d3-631b-481b-b480-226968d0b73c","Type":"ContainerStarted","Data":"e98026e2d231e220c951fa49d7857232defb4951e56f1f5e7e3e3439b093b316"} Nov 26 13:57:55 crc kubenswrapper[4695]: I1126 13:57:55.068988 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw4xf"] Nov 26 13:57:55 crc kubenswrapper[4695]: I1126 13:57:55.077187 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-kw4xf"] Nov 26 13:57:55 crc kubenswrapper[4695]: I1126 13:57:55.186151 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8916bc6c-1aae-4ee1-bc89-9afb8d98f34d" path="/var/lib/kubelet/pods/8916bc6c-1aae-4ee1-bc89-9afb8d98f34d/volumes" Nov 26 13:57:56 crc kubenswrapper[4695]: I1126 13:57:56.663014 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" event={"ID":"d77b20d3-631b-481b-b480-226968d0b73c","Type":"ContainerStarted","Data":"567c2d774bf413280e78ace953d19045efb5be4e882b87af8613fb91bd49e3b7"} Nov 26 13:57:56 crc kubenswrapper[4695]: I1126 13:57:56.680595 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" podStartSLOduration=2.307981549 podStartE2EDuration="4.68057531s" podCreationTimestamp="2025-11-26 13:57:52 +0000 UTC" firstStartedPulling="2025-11-26 13:57:53.56722234 +0000 UTC m=+2057.203047432" lastFinishedPulling="2025-11-26 13:57:55.939816111 +0000 UTC 
m=+2059.575641193" observedRunningTime="2025-11-26 13:57:56.678479615 +0000 UTC m=+2060.314304707" watchObservedRunningTime="2025-11-26 13:57:56.68057531 +0000 UTC m=+2060.316400392" Nov 26 13:58:04 crc kubenswrapper[4695]: I1126 13:58:04.742389 4695 generic.go:334] "Generic (PLEG): container finished" podID="d77b20d3-631b-481b-b480-226968d0b73c" containerID="567c2d774bf413280e78ace953d19045efb5be4e882b87af8613fb91bd49e3b7" exitCode=0 Nov 26 13:58:04 crc kubenswrapper[4695]: I1126 13:58:04.742452 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" event={"ID":"d77b20d3-631b-481b-b480-226968d0b73c","Type":"ContainerDied","Data":"567c2d774bf413280e78ace953d19045efb5be4e882b87af8613fb91bd49e3b7"} Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.157700 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.282129 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4zkk\" (UniqueName: \"kubernetes.io/projected/d77b20d3-631b-481b-b480-226968d0b73c-kube-api-access-w4zkk\") pod \"d77b20d3-631b-481b-b480-226968d0b73c\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.282549 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-inventory\") pod \"d77b20d3-631b-481b-b480-226968d0b73c\" (UID: \"d77b20d3-631b-481b-b480-226968d0b73c\") " Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.282728 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-ssh-key\") pod \"d77b20d3-631b-481b-b480-226968d0b73c\" (UID: 
\"d77b20d3-631b-481b-b480-226968d0b73c\") " Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.288045 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d77b20d3-631b-481b-b480-226968d0b73c-kube-api-access-w4zkk" (OuterVolumeSpecName: "kube-api-access-w4zkk") pod "d77b20d3-631b-481b-b480-226968d0b73c" (UID: "d77b20d3-631b-481b-b480-226968d0b73c"). InnerVolumeSpecName "kube-api-access-w4zkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.311542 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d77b20d3-631b-481b-b480-226968d0b73c" (UID: "d77b20d3-631b-481b-b480-226968d0b73c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.311564 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-inventory" (OuterVolumeSpecName: "inventory") pod "d77b20d3-631b-481b-b480-226968d0b73c" (UID: "d77b20d3-631b-481b-b480-226968d0b73c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.384890 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4zkk\" (UniqueName: \"kubernetes.io/projected/d77b20d3-631b-481b-b480-226968d0b73c-kube-api-access-w4zkk\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.384925 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.384934 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d77b20d3-631b-481b-b480-226968d0b73c-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.396420 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.396491 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.766149 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" event={"ID":"d77b20d3-631b-481b-b480-226968d0b73c","Type":"ContainerDied","Data":"e98026e2d231e220c951fa49d7857232defb4951e56f1f5e7e3e3439b093b316"} Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.766198 4695 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e98026e2d231e220c951fa49d7857232defb4951e56f1f5e7e3e3439b093b316" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.766226 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-2twlt" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.841335 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh"] Nov 26 13:58:06 crc kubenswrapper[4695]: E1126 13:58:06.841848 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d77b20d3-631b-481b-b480-226968d0b73c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.841867 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d77b20d3-631b-481b-b480-226968d0b73c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.842038 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d77b20d3-631b-481b-b480-226968d0b73c" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.842665 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.844729 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.845129 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.845164 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.845387 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.854928 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh"] Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.995773 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwhl\" (UniqueName: \"kubernetes.io/projected/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-kube-api-access-4xwhl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.996767 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:06 crc kubenswrapper[4695]: I1126 13:58:06.996833 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.098877 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xwhl\" (UniqueName: \"kubernetes.io/projected/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-kube-api-access-4xwhl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.099300 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.099336 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.105201 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.114520 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.122477 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xwhl\" (UniqueName: \"kubernetes.io/projected/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-kube-api-access-4xwhl\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.162802 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.233295 4695 scope.go:117] "RemoveContainer" containerID="e8634326a70c4fc05cc4c181982d6b854ae2092519e8dc60bbd2057a8a2648fb" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.311062 4695 scope.go:117] "RemoveContainer" containerID="b3d4c2aa6ff6b4a554235f85b028ada38dc383de9a97066154c4046607ac0034" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.364567 4695 scope.go:117] "RemoveContainer" containerID="4ddbf728ae9b9e1a05878ff58148c9394a762fdd3265cf2de5e9c590dbad105c" Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.738754 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh"] Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.742700 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 13:58:07 crc kubenswrapper[4695]: I1126 13:58:07.775767 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" event={"ID":"fbe77fc2-bcee-446d-a02c-5a992ab5dcae","Type":"ContainerStarted","Data":"19b1f63994fae3fe3392d9b2399dd1287b50c50e9f921be5fffbe3f027b90a82"} Nov 26 13:58:08 crc kubenswrapper[4695]: I1126 13:58:08.787962 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" event={"ID":"fbe77fc2-bcee-446d-a02c-5a992ab5dcae","Type":"ContainerStarted","Data":"4ccd2f8b81e9a6f60cbf2b54f038bc79256519be4df0788f9fbb24a965687b48"} Nov 26 13:58:08 crc kubenswrapper[4695]: I1126 13:58:08.808762 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" podStartSLOduration=2.344930944 podStartE2EDuration="2.808743555s" podCreationTimestamp="2025-11-26 13:58:06 +0000 UTC" 
firstStartedPulling="2025-11-26 13:58:07.742448007 +0000 UTC m=+2071.378273089" lastFinishedPulling="2025-11-26 13:58:08.206260618 +0000 UTC m=+2071.842085700" observedRunningTime="2025-11-26 13:58:08.804678938 +0000 UTC m=+2072.440504020" watchObservedRunningTime="2025-11-26 13:58:08.808743555 +0000 UTC m=+2072.444568637" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.659191 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p82ng"] Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.662015 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.670007 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p82ng"] Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.725406 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-utilities\") pod \"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.725512 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-catalog-content\") pod \"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.725865 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn4z5\" (UniqueName: \"kubernetes.io/projected/69574f72-0e6e-42a8-bb09-15495f6f6601-kube-api-access-vn4z5\") pod 
\"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.827276 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-utilities\") pod \"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.827737 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-catalog-content\") pod \"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.827890 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn4z5\" (UniqueName: \"kubernetes.io/projected/69574f72-0e6e-42a8-bb09-15495f6f6601-kube-api-access-vn4z5\") pod \"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.828086 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-utilities\") pod \"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.828331 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-catalog-content\") pod \"certified-operators-p82ng\" (UID: 
\"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.861213 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn4z5\" (UniqueName: \"kubernetes.io/projected/69574f72-0e6e-42a8-bb09-15495f6f6601-kube-api-access-vn4z5\") pod \"certified-operators-p82ng\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.873794 4695 generic.go:334] "Generic (PLEG): container finished" podID="fbe77fc2-bcee-446d-a02c-5a992ab5dcae" containerID="4ccd2f8b81e9a6f60cbf2b54f038bc79256519be4df0788f9fbb24a965687b48" exitCode=0 Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.873866 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" event={"ID":"fbe77fc2-bcee-446d-a02c-5a992ab5dcae","Type":"ContainerDied","Data":"4ccd2f8b81e9a6f60cbf2b54f038bc79256519be4df0788f9fbb24a965687b48"} Nov 26 13:58:18 crc kubenswrapper[4695]: I1126 13:58:18.992954 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:19 crc kubenswrapper[4695]: I1126 13:58:19.528293 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p82ng"] Nov 26 13:58:19 crc kubenswrapper[4695]: I1126 13:58:19.881991 4695 generic.go:334] "Generic (PLEG): container finished" podID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerID="8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53" exitCode=0 Nov 26 13:58:19 crc kubenswrapper[4695]: I1126 13:58:19.882951 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p82ng" event={"ID":"69574f72-0e6e-42a8-bb09-15495f6f6601","Type":"ContainerDied","Data":"8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53"} Nov 26 13:58:19 crc kubenswrapper[4695]: I1126 13:58:19.882975 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p82ng" event={"ID":"69574f72-0e6e-42a8-bb09-15495f6f6601","Type":"ContainerStarted","Data":"db8c8d51bce209f6a754bf3be06ea0870c7aca0abb82078b5f1b657e4cad7d57"} Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.281572 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.360505 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-inventory\") pod \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.360593 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-ssh-key\") pod \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.360667 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xwhl\" (UniqueName: \"kubernetes.io/projected/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-kube-api-access-4xwhl\") pod \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\" (UID: \"fbe77fc2-bcee-446d-a02c-5a992ab5dcae\") " Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.376315 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-kube-api-access-4xwhl" (OuterVolumeSpecName: "kube-api-access-4xwhl") pod "fbe77fc2-bcee-446d-a02c-5a992ab5dcae" (UID: "fbe77fc2-bcee-446d-a02c-5a992ab5dcae"). InnerVolumeSpecName "kube-api-access-4xwhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.388471 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-inventory" (OuterVolumeSpecName: "inventory") pod "fbe77fc2-bcee-446d-a02c-5a992ab5dcae" (UID: "fbe77fc2-bcee-446d-a02c-5a992ab5dcae"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.394780 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fbe77fc2-bcee-446d-a02c-5a992ab5dcae" (UID: "fbe77fc2-bcee-446d-a02c-5a992ab5dcae"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.464117 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.464395 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.464492 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4xwhl\" (UniqueName: \"kubernetes.io/projected/fbe77fc2-bcee-446d-a02c-5a992ab5dcae-kube-api-access-4xwhl\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.895200 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" event={"ID":"fbe77fc2-bcee-446d-a02c-5a992ab5dcae","Type":"ContainerDied","Data":"19b1f63994fae3fe3392d9b2399dd1287b50c50e9f921be5fffbe3f027b90a82"} Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.895257 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b1f63994fae3fe3392d9b2399dd1287b50c50e9f921be5fffbe3f027b90a82" Nov 26 13:58:20 crc kubenswrapper[4695]: I1126 13:58:20.895269 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.016071 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc"] Nov 26 13:58:21 crc kubenswrapper[4695]: E1126 13:58:21.016957 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbe77fc2-bcee-446d-a02c-5a992ab5dcae" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.016978 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbe77fc2-bcee-446d-a02c-5a992ab5dcae" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.017219 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbe77fc2-bcee-446d-a02c-5a992ab5dcae" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.018010 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.022321 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.024990 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.025462 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.025653 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.025850 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.025973 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.026124 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.026758 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.027048 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc"] Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.183090 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.183492 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.183781 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.183982 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.184233 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.184515 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.184914 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.185087 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.185165 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.185290 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.185482 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.185556 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f52g2\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-kube-api-access-f52g2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.185638 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.185697 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.287809 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.287881 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.287985 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288017 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288079 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288121 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288161 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ovn-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288191 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288256 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288285 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288309 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288367 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288424 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.288456 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f52g2\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-kube-api-access-f52g2\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.293857 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.294204 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.294225 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.294671 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.297332 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.298021 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.298400 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.298964 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.298970 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.299214 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.299238 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.299719 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.303257 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.322396 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f52g2\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-kube-api-access-f52g2\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2jszc\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.343057 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.939832 4695 generic.go:334] "Generic (PLEG): container finished" podID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerID="8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9" exitCode=0 Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.940339 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p82ng" event={"ID":"69574f72-0e6e-42a8-bb09-15495f6f6601","Type":"ContainerDied","Data":"8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9"} Nov 26 13:58:21 crc kubenswrapper[4695]: I1126 13:58:21.971821 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc"] Nov 26 13:58:22 crc kubenswrapper[4695]: I1126 13:58:22.950316 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" event={"ID":"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f","Type":"ContainerStarted","Data":"a5f08f6d2a8998b01666024b22dab8ee16f6bede08258c6869eda7d425dd9aa8"} Nov 26 13:58:22 crc kubenswrapper[4695]: I1126 13:58:22.950627 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" event={"ID":"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f","Type":"ContainerStarted","Data":"af479997a77dea38fd4a8c1f7d8fe787e8b8a8111a61213d3231e882a5830942"} Nov 26 13:58:22 crc kubenswrapper[4695]: I1126 13:58:22.952530 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-p82ng" event={"ID":"69574f72-0e6e-42a8-bb09-15495f6f6601","Type":"ContainerStarted","Data":"8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841"} Nov 26 13:58:22 crc kubenswrapper[4695]: I1126 13:58:22.977258 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" podStartSLOduration=2.367000261 podStartE2EDuration="2.977237194s" podCreationTimestamp="2025-11-26 13:58:20 +0000 UTC" firstStartedPulling="2025-11-26 13:58:21.988526231 +0000 UTC m=+2085.624351313" lastFinishedPulling="2025-11-26 13:58:22.598763164 +0000 UTC m=+2086.234588246" observedRunningTime="2025-11-26 13:58:22.969758041 +0000 UTC m=+2086.605583123" watchObservedRunningTime="2025-11-26 13:58:22.977237194 +0000 UTC m=+2086.613062276" Nov 26 13:58:22 crc kubenswrapper[4695]: I1126 13:58:22.996853 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p82ng" podStartSLOduration=2.296154949 podStartE2EDuration="4.996827475s" podCreationTimestamp="2025-11-26 13:58:18 +0000 UTC" firstStartedPulling="2025-11-26 13:58:19.883490643 +0000 UTC m=+2083.519315725" lastFinishedPulling="2025-11-26 13:58:22.584163169 +0000 UTC m=+2086.219988251" observedRunningTime="2025-11-26 13:58:22.990091906 +0000 UTC m=+2086.625917018" watchObservedRunningTime="2025-11-26 13:58:22.996827475 +0000 UTC m=+2086.632652547" Nov 26 13:58:28 crc kubenswrapper[4695]: I1126 13:58:28.993095 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:28 crc kubenswrapper[4695]: I1126 13:58:28.993779 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:29 crc kubenswrapper[4695]: I1126 13:58:29.049854 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:29 crc kubenswrapper[4695]: I1126 13:58:29.104568 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:29 crc kubenswrapper[4695]: I1126 13:58:29.286813 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p82ng"] Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.018539 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p82ng" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="registry-server" containerID="cri-o://8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841" gracePeriod=2 Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.497583 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.607745 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn4z5\" (UniqueName: \"kubernetes.io/projected/69574f72-0e6e-42a8-bb09-15495f6f6601-kube-api-access-vn4z5\") pod \"69574f72-0e6e-42a8-bb09-15495f6f6601\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.607831 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-catalog-content\") pod \"69574f72-0e6e-42a8-bb09-15495f6f6601\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.608080 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-utilities\") pod 
\"69574f72-0e6e-42a8-bb09-15495f6f6601\" (UID: \"69574f72-0e6e-42a8-bb09-15495f6f6601\") " Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.614562 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-utilities" (OuterVolumeSpecName: "utilities") pod "69574f72-0e6e-42a8-bb09-15495f6f6601" (UID: "69574f72-0e6e-42a8-bb09-15495f6f6601"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.625565 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69574f72-0e6e-42a8-bb09-15495f6f6601-kube-api-access-vn4z5" (OuterVolumeSpecName: "kube-api-access-vn4z5") pod "69574f72-0e6e-42a8-bb09-15495f6f6601" (UID: "69574f72-0e6e-42a8-bb09-15495f6f6601"). InnerVolumeSpecName "kube-api-access-vn4z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.671542 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69574f72-0e6e-42a8-bb09-15495f6f6601" (UID: "69574f72-0e6e-42a8-bb09-15495f6f6601"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.709915 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.709955 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn4z5\" (UniqueName: \"kubernetes.io/projected/69574f72-0e6e-42a8-bb09-15495f6f6601-kube-api-access-vn4z5\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:31 crc kubenswrapper[4695]: I1126 13:58:31.709969 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69574f72-0e6e-42a8-bb09-15495f6f6601-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.032639 4695 generic.go:334] "Generic (PLEG): container finished" podID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerID="8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841" exitCode=0 Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.032702 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p82ng" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.032700 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p82ng" event={"ID":"69574f72-0e6e-42a8-bb09-15495f6f6601","Type":"ContainerDied","Data":"8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841"} Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.032920 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p82ng" event={"ID":"69574f72-0e6e-42a8-bb09-15495f6f6601","Type":"ContainerDied","Data":"db8c8d51bce209f6a754bf3be06ea0870c7aca0abb82078b5f1b657e4cad7d57"} Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.032956 4695 scope.go:117] "RemoveContainer" containerID="8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.058795 4695 scope.go:117] "RemoveContainer" containerID="8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.083791 4695 scope.go:117] "RemoveContainer" containerID="8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.084332 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p82ng"] Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.092170 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p82ng"] Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.127844 4695 scope.go:117] "RemoveContainer" containerID="8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841" Nov 26 13:58:32 crc kubenswrapper[4695]: E1126 13:58:32.130802 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841\": container with ID starting with 8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841 not found: ID does not exist" containerID="8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.130847 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841"} err="failed to get container status \"8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841\": rpc error: code = NotFound desc = could not find container \"8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841\": container with ID starting with 8f7916b2dbae64a7ad9f1b8d81678e94dd66db946087d99a486f40962ce18841 not found: ID does not exist" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.130879 4695 scope.go:117] "RemoveContainer" containerID="8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9" Nov 26 13:58:32 crc kubenswrapper[4695]: E1126 13:58:32.131273 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9\": container with ID starting with 8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9 not found: ID does not exist" containerID="8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.131291 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9"} err="failed to get container status \"8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9\": rpc error: code = NotFound desc = could not find container \"8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9\": container with ID 
starting with 8a90545b0ef901cdd7b8ef1ae6fa041087ad7af3bb545f64241f064109d50df9 not found: ID does not exist" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.131303 4695 scope.go:117] "RemoveContainer" containerID="8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53" Nov 26 13:58:32 crc kubenswrapper[4695]: E1126 13:58:32.131552 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53\": container with ID starting with 8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53 not found: ID does not exist" containerID="8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53" Nov 26 13:58:32 crc kubenswrapper[4695]: I1126 13:58:32.131577 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53"} err="failed to get container status \"8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53\": rpc error: code = NotFound desc = could not find container \"8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53\": container with ID starting with 8b9340b9e340a27a979359a56fa20c0b0b847c4468b69d3f72c4be5956032b53 not found: ID does not exist" Nov 26 13:58:33 crc kubenswrapper[4695]: I1126 13:58:33.173014 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" path="/var/lib/kubelet/pods/69574f72-0e6e-42a8-bb09-15495f6f6601/volumes" Nov 26 13:58:36 crc kubenswrapper[4695]: I1126 13:58:36.397153 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 13:58:36 crc kubenswrapper[4695]: I1126 
13:58:36.397602 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 13:58:36 crc kubenswrapper[4695]: I1126 13:58:36.397683 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 13:58:36 crc kubenswrapper[4695]: I1126 13:58:36.398597 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9054ffcb43e1cd20adf0a64a79d1ce9e9ce7c6483384269e4ffce8dac0186885"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 13:58:36 crc kubenswrapper[4695]: I1126 13:58:36.398734 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://9054ffcb43e1cd20adf0a64a79d1ce9e9ce7c6483384269e4ffce8dac0186885" gracePeriod=600 Nov 26 13:58:37 crc kubenswrapper[4695]: I1126 13:58:37.081679 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="9054ffcb43e1cd20adf0a64a79d1ce9e9ce7c6483384269e4ffce8dac0186885" exitCode=0 Nov 26 13:58:37 crc kubenswrapper[4695]: I1126 13:58:37.081746 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"9054ffcb43e1cd20adf0a64a79d1ce9e9ce7c6483384269e4ffce8dac0186885"} Nov 26 13:58:37 crc 
kubenswrapper[4695]: I1126 13:58:37.081982 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993"} Nov 26 13:58:37 crc kubenswrapper[4695]: I1126 13:58:37.082003 4695 scope.go:117] "RemoveContainer" containerID="656f19695a0262e2327d24e1bc640eb35d4cc894869206ffa000044e24a5d306" Nov 26 13:59:00 crc kubenswrapper[4695]: I1126 13:59:00.283546 4695 generic.go:334] "Generic (PLEG): container finished" podID="509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" containerID="a5f08f6d2a8998b01666024b22dab8ee16f6bede08258c6869eda7d425dd9aa8" exitCode=0 Nov 26 13:59:00 crc kubenswrapper[4695]: I1126 13:59:00.283636 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" event={"ID":"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f","Type":"ContainerDied","Data":"a5f08f6d2a8998b01666024b22dab8ee16f6bede08258c6869eda7d425dd9aa8"} Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.730709 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.808927 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809013 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ovn-combined-ca-bundle\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809071 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-bootstrap-combined-ca-bundle\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809089 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809117 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-repo-setup-combined-ca-bundle\") pod 
\"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809141 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-nova-combined-ca-bundle\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809167 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-neutron-metadata-combined-ca-bundle\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809196 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-ovn-default-certs-0\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809236 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809257 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-telemetry-combined-ca-bundle\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" 
(UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809272 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f52g2\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-kube-api-access-f52g2\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809290 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ssh-key\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809359 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-libvirt-combined-ca-bundle\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.809376 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-inventory\") pod \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\" (UID: \"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f\") " Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.815012 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.815755 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.816178 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.816586 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.816810 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.816893 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.818754 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.818799 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.819157 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-kube-api-access-f52g2" (OuterVolumeSpecName: "kube-api-access-f52g2") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "kube-api-access-f52g2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.819221 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.820627 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.821224 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.842215 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.842391 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-inventory" (OuterVolumeSpecName: "inventory") pod "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" (UID: "509c6c88-4720-4dcc-b9fc-e50ef40c4a6f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911785 4695 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911817 4695 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911829 4695 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911840 4695 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911850 4695 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc 
kubenswrapper[4695]: I1126 13:59:01.911859 4695 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911868 4695 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911877 4695 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911886 4695 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911904 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f52g2\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-kube-api-access-f52g2\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911923 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911936 4695 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911946 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:01 crc kubenswrapper[4695]: I1126 13:59:01.911956 4695 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/509c6c88-4720-4dcc-b9fc-e50ef40c4a6f-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.304408 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" event={"ID":"509c6c88-4720-4dcc-b9fc-e50ef40c4a6f","Type":"ContainerDied","Data":"af479997a77dea38fd4a8c1f7d8fe787e8b8a8111a61213d3231e882a5830942"} Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.304457 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af479997a77dea38fd4a8c1f7d8fe787e8b8a8111a61213d3231e882a5830942" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.304520 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2jszc" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.398967 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw"] Nov 26 13:59:02 crc kubenswrapper[4695]: E1126 13:59:02.399520 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="extract-content" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.399544 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="extract-content" Nov 26 13:59:02 crc kubenswrapper[4695]: E1126 13:59:02.399568 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="registry-server" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.399578 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="registry-server" Nov 26 13:59:02 crc kubenswrapper[4695]: E1126 13:59:02.399598 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.399607 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 26 13:59:02 crc kubenswrapper[4695]: E1126 13:59:02.399636 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="extract-utilities" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.399644 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="extract-utilities" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.399922 
4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="69574f72-0e6e-42a8-bb09-15495f6f6601" containerName="registry-server" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.399955 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="509c6c88-4720-4dcc-b9fc-e50ef40c4a6f" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.400764 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.402641 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.402750 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.402902 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.402992 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.403155 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.410702 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw"] Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.527114 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: 
\"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.527558 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.527601 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72410dcc-406c-43d5-bc58-320471e9df04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.527866 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpdcv\" (UniqueName: \"kubernetes.io/projected/72410dcc-406c-43d5-bc58-320471e9df04-kube-api-access-lpdcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.527948 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.630232 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lpdcv\" (UniqueName: \"kubernetes.io/projected/72410dcc-406c-43d5-bc58-320471e9df04-kube-api-access-lpdcv\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.630297 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.630387 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.630470 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.630510 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72410dcc-406c-43d5-bc58-320471e9df04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.631477 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72410dcc-406c-43d5-bc58-320471e9df04-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.634040 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.634325 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.634493 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.654425 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpdcv\" (UniqueName: \"kubernetes.io/projected/72410dcc-406c-43d5-bc58-320471e9df04-kube-api-access-lpdcv\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-l77jw\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:02 crc kubenswrapper[4695]: I1126 13:59:02.725517 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 13:59:03 crc kubenswrapper[4695]: I1126 13:59:03.240321 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw"] Nov 26 13:59:03 crc kubenswrapper[4695]: I1126 13:59:03.313864 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" event={"ID":"72410dcc-406c-43d5-bc58-320471e9df04","Type":"ContainerStarted","Data":"1f100cc6876121399e755f21054eb66826229c006b89f05e1c4ac8c76e8ee569"} Nov 26 13:59:04 crc kubenswrapper[4695]: I1126 13:59:04.323787 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" event={"ID":"72410dcc-406c-43d5-bc58-320471e9df04","Type":"ContainerStarted","Data":"be4c74d8eff6a9811a7c01b7ffff96828edb37941eefc207f645da6a98bb6f8a"} Nov 26 13:59:04 crc kubenswrapper[4695]: I1126 13:59:04.341266 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" podStartSLOduration=1.81310532 podStartE2EDuration="2.341249851s" podCreationTimestamp="2025-11-26 13:59:02 +0000 UTC" firstStartedPulling="2025-11-26 13:59:03.252186027 +0000 UTC m=+2126.888011109" lastFinishedPulling="2025-11-26 13:59:03.780330568 +0000 UTC m=+2127.416155640" observedRunningTime="2025-11-26 13:59:04.336858544 +0000 UTC m=+2127.972683646" watchObservedRunningTime="2025-11-26 13:59:04.341249851 +0000 UTC m=+2127.977074933" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.151977 4695 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt"] Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.154001 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.155895 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.156410 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.164023 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt"] Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.344376 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0a713d8-fbe1-4101-92da-06d39005073a-config-volume\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.344548 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0a713d8-fbe1-4101-92da-06d39005073a-secret-volume\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.344635 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvj8\" (UniqueName: 
\"kubernetes.io/projected/f0a713d8-fbe1-4101-92da-06d39005073a-kube-api-access-bdvj8\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.445988 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0a713d8-fbe1-4101-92da-06d39005073a-secret-volume\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.446059 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvj8\" (UniqueName: \"kubernetes.io/projected/f0a713d8-fbe1-4101-92da-06d39005073a-kube-api-access-bdvj8\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.446145 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0a713d8-fbe1-4101-92da-06d39005073a-config-volume\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.447321 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0a713d8-fbe1-4101-92da-06d39005073a-config-volume\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 
14:00:00.460895 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0a713d8-fbe1-4101-92da-06d39005073a-secret-volume\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.463527 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvj8\" (UniqueName: \"kubernetes.io/projected/f0a713d8-fbe1-4101-92da-06d39005073a-kube-api-access-bdvj8\") pod \"collect-profiles-29402760-k6vlt\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.481154 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:00 crc kubenswrapper[4695]: I1126 14:00:00.904292 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt"] Nov 26 14:00:01 crc kubenswrapper[4695]: I1126 14:00:01.858033 4695 generic.go:334] "Generic (PLEG): container finished" podID="f0a713d8-fbe1-4101-92da-06d39005073a" containerID="9f5f91fb84356d3cafb7558c6c8e176ac331b8e66df8a11de6de4da9cecd0221" exitCode=0 Nov 26 14:00:01 crc kubenswrapper[4695]: I1126 14:00:01.858122 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" event={"ID":"f0a713d8-fbe1-4101-92da-06d39005073a","Type":"ContainerDied","Data":"9f5f91fb84356d3cafb7558c6c8e176ac331b8e66df8a11de6de4da9cecd0221"} Nov 26 14:00:01 crc kubenswrapper[4695]: I1126 14:00:01.858393 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" 
event={"ID":"f0a713d8-fbe1-4101-92da-06d39005073a","Type":"ContainerStarted","Data":"cc757667087330f558b818ecbacd42e4bcd48e0a8f56fd8f3071331592f96fb5"} Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.198563 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.309867 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0a713d8-fbe1-4101-92da-06d39005073a-secret-volume\") pod \"f0a713d8-fbe1-4101-92da-06d39005073a\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.310010 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0a713d8-fbe1-4101-92da-06d39005073a-config-volume\") pod \"f0a713d8-fbe1-4101-92da-06d39005073a\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.310251 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdvj8\" (UniqueName: \"kubernetes.io/projected/f0a713d8-fbe1-4101-92da-06d39005073a-kube-api-access-bdvj8\") pod \"f0a713d8-fbe1-4101-92da-06d39005073a\" (UID: \"f0a713d8-fbe1-4101-92da-06d39005073a\") " Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.310961 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0a713d8-fbe1-4101-92da-06d39005073a-config-volume" (OuterVolumeSpecName: "config-volume") pod "f0a713d8-fbe1-4101-92da-06d39005073a" (UID: "f0a713d8-fbe1-4101-92da-06d39005073a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.317172 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0a713d8-fbe1-4101-92da-06d39005073a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f0a713d8-fbe1-4101-92da-06d39005073a" (UID: "f0a713d8-fbe1-4101-92da-06d39005073a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.317266 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a713d8-fbe1-4101-92da-06d39005073a-kube-api-access-bdvj8" (OuterVolumeSpecName: "kube-api-access-bdvj8") pod "f0a713d8-fbe1-4101-92da-06d39005073a" (UID: "f0a713d8-fbe1-4101-92da-06d39005073a"). InnerVolumeSpecName "kube-api-access-bdvj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.412616 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdvj8\" (UniqueName: \"kubernetes.io/projected/f0a713d8-fbe1-4101-92da-06d39005073a-kube-api-access-bdvj8\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.412927 4695 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f0a713d8-fbe1-4101-92da-06d39005073a-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.412998 4695 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0a713d8-fbe1-4101-92da-06d39005073a-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.878241 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" 
event={"ID":"f0a713d8-fbe1-4101-92da-06d39005073a","Type":"ContainerDied","Data":"cc757667087330f558b818ecbacd42e4bcd48e0a8f56fd8f3071331592f96fb5"} Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.878285 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc757667087330f558b818ecbacd42e4bcd48e0a8f56fd8f3071331592f96fb5" Nov 26 14:00:03 crc kubenswrapper[4695]: I1126 14:00:03.878391 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402760-k6vlt" Nov 26 14:00:04 crc kubenswrapper[4695]: I1126 14:00:04.274668 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8"] Nov 26 14:00:04 crc kubenswrapper[4695]: I1126 14:00:04.284475 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402715-4w2c8"] Nov 26 14:00:04 crc kubenswrapper[4695]: I1126 14:00:04.903550 4695 generic.go:334] "Generic (PLEG): container finished" podID="72410dcc-406c-43d5-bc58-320471e9df04" containerID="be4c74d8eff6a9811a7c01b7ffff96828edb37941eefc207f645da6a98bb6f8a" exitCode=0 Nov 26 14:00:04 crc kubenswrapper[4695]: I1126 14:00:04.903630 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" event={"ID":"72410dcc-406c-43d5-bc58-320471e9df04","Type":"ContainerDied","Data":"be4c74d8eff6a9811a7c01b7ffff96828edb37941eefc207f645da6a98bb6f8a"} Nov 26 14:00:05 crc kubenswrapper[4695]: I1126 14:00:05.173939 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9a9b369-2369-4fe3-9568-ada564d1c2a6" path="/var/lib/kubelet/pods/b9a9b369-2369-4fe3-9568-ada564d1c2a6/volumes" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.360295 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.468038 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-inventory\") pod \"72410dcc-406c-43d5-bc58-320471e9df04\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.468173 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpdcv\" (UniqueName: \"kubernetes.io/projected/72410dcc-406c-43d5-bc58-320471e9df04-kube-api-access-lpdcv\") pod \"72410dcc-406c-43d5-bc58-320471e9df04\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.468241 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ssh-key\") pod \"72410dcc-406c-43d5-bc58-320471e9df04\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.468292 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72410dcc-406c-43d5-bc58-320471e9df04-ovncontroller-config-0\") pod \"72410dcc-406c-43d5-bc58-320471e9df04\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.468515 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ovn-combined-ca-bundle\") pod \"72410dcc-406c-43d5-bc58-320471e9df04\" (UID: \"72410dcc-406c-43d5-bc58-320471e9df04\") " Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.476705 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "72410dcc-406c-43d5-bc58-320471e9df04" (UID: "72410dcc-406c-43d5-bc58-320471e9df04"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.477137 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72410dcc-406c-43d5-bc58-320471e9df04-kube-api-access-lpdcv" (OuterVolumeSpecName: "kube-api-access-lpdcv") pod "72410dcc-406c-43d5-bc58-320471e9df04" (UID: "72410dcc-406c-43d5-bc58-320471e9df04"). InnerVolumeSpecName "kube-api-access-lpdcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.498462 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "72410dcc-406c-43d5-bc58-320471e9df04" (UID: "72410dcc-406c-43d5-bc58-320471e9df04"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.498703 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-inventory" (OuterVolumeSpecName: "inventory") pod "72410dcc-406c-43d5-bc58-320471e9df04" (UID: "72410dcc-406c-43d5-bc58-320471e9df04"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.498707 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72410dcc-406c-43d5-bc58-320471e9df04-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "72410dcc-406c-43d5-bc58-320471e9df04" (UID: "72410dcc-406c-43d5-bc58-320471e9df04"). 
InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.570389 4695 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.570752 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.570765 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpdcv\" (UniqueName: \"kubernetes.io/projected/72410dcc-406c-43d5-bc58-320471e9df04-kube-api-access-lpdcv\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.570778 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/72410dcc-406c-43d5-bc58-320471e9df04-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.570797 4695 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/72410dcc-406c-43d5-bc58-320471e9df04-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.922590 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" event={"ID":"72410dcc-406c-43d5-bc58-320471e9df04","Type":"ContainerDied","Data":"1f100cc6876121399e755f21054eb66826229c006b89f05e1c4ac8c76e8ee569"} Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.922628 4695 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1f100cc6876121399e755f21054eb66826229c006b89f05e1c4ac8c76e8ee569" Nov 26 14:00:06 crc kubenswrapper[4695]: I1126 14:00:06.922632 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-l77jw" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.003080 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm"] Nov 26 14:00:07 crc kubenswrapper[4695]: E1126 14:00:07.003501 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a713d8-fbe1-4101-92da-06d39005073a" containerName="collect-profiles" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.003518 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a713d8-fbe1-4101-92da-06d39005073a" containerName="collect-profiles" Nov 26 14:00:07 crc kubenswrapper[4695]: E1126 14:00:07.003540 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72410dcc-406c-43d5-bc58-320471e9df04" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.003547 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="72410dcc-406c-43d5-bc58-320471e9df04" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.003726 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a713d8-fbe1-4101-92da-06d39005073a" containerName="collect-profiles" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.003746 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="72410dcc-406c-43d5-bc58-320471e9df04" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.004382 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.007833 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.007870 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.008166 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.008266 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.008439 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.008505 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.023874 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm"] Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.080305 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmwt\" (UniqueName: \"kubernetes.io/projected/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-kube-api-access-brmwt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.080673 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.080903 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.081083 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.081309 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.081532 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.183221 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.183279 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.183321 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brmwt\" (UniqueName: \"kubernetes.io/projected/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-kube-api-access-brmwt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.183396 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.183478 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.183505 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.188152 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.188507 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.188828 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.189105 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.190709 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.202123 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmwt\" (UniqueName: \"kubernetes.io/projected/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-kube-api-access-brmwt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.322312 4695 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.552112 4695 scope.go:117] "RemoveContainer" containerID="b85cac8689fd2f9e52b918b29d8980e90364b106fb738cf729f66c8eecf4f3fa" Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.839223 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm"] Nov 26 14:00:07 crc kubenswrapper[4695]: I1126 14:00:07.932669 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" event={"ID":"7ac47dbe-143a-49da-80b2-e60fc44ebaf4","Type":"ContainerStarted","Data":"a69cb83be8486c45b62bfaa97a28c9602831704243f41d7a4c8454b73b44dbe3"} Nov 26 14:00:09 crc kubenswrapper[4695]: I1126 14:00:09.950117 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" event={"ID":"7ac47dbe-143a-49da-80b2-e60fc44ebaf4","Type":"ContainerStarted","Data":"00c4be1a301a586cf008ee290c0f363952094b6ede48dc3121183dd023e68ef3"} Nov 26 14:00:09 crc kubenswrapper[4695]: I1126 14:00:09.969917 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" podStartSLOduration=2.74525605 podStartE2EDuration="3.969898469s" podCreationTimestamp="2025-11-26 14:00:06 +0000 UTC" firstStartedPulling="2025-11-26 14:00:07.846800766 +0000 UTC m=+2191.482625848" lastFinishedPulling="2025-11-26 14:00:09.071443185 +0000 UTC m=+2192.707268267" observedRunningTime="2025-11-26 14:00:09.966863273 +0000 UTC m=+2193.602688355" watchObservedRunningTime="2025-11-26 14:00:09.969898469 +0000 UTC m=+2193.605723551" Nov 26 14:00:36 crc kubenswrapper[4695]: I1126 14:00:36.396700 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:00:36 crc kubenswrapper[4695]: I1126 14:00:36.397267 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:00:54 crc kubenswrapper[4695]: I1126 14:00:54.365186 4695 generic.go:334] "Generic (PLEG): container finished" podID="7ac47dbe-143a-49da-80b2-e60fc44ebaf4" containerID="00c4be1a301a586cf008ee290c0f363952094b6ede48dc3121183dd023e68ef3" exitCode=0 Nov 26 14:00:54 crc kubenswrapper[4695]: I1126 14:00:54.365263 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" event={"ID":"7ac47dbe-143a-49da-80b2-e60fc44ebaf4","Type":"ContainerDied","Data":"00c4be1a301a586cf008ee290c0f363952094b6ede48dc3121183dd023e68ef3"} Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.770634 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.820463 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-metadata-combined-ca-bundle\") pod \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.820537 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.820599 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brmwt\" (UniqueName: \"kubernetes.io/projected/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-kube-api-access-brmwt\") pod \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.820623 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-inventory\") pod \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.820657 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-nova-metadata-neutron-config-0\") pod \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " Nov 
26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.820718 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-ssh-key\") pod \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\" (UID: \"7ac47dbe-143a-49da-80b2-e60fc44ebaf4\") " Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.828636 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-kube-api-access-brmwt" (OuterVolumeSpecName: "kube-api-access-brmwt") pod "7ac47dbe-143a-49da-80b2-e60fc44ebaf4" (UID: "7ac47dbe-143a-49da-80b2-e60fc44ebaf4"). InnerVolumeSpecName "kube-api-access-brmwt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.829262 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7ac47dbe-143a-49da-80b2-e60fc44ebaf4" (UID: "7ac47dbe-143a-49da-80b2-e60fc44ebaf4"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.854126 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7ac47dbe-143a-49da-80b2-e60fc44ebaf4" (UID: "7ac47dbe-143a-49da-80b2-e60fc44ebaf4"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.858807 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-inventory" (OuterVolumeSpecName: "inventory") pod "7ac47dbe-143a-49da-80b2-e60fc44ebaf4" (UID: "7ac47dbe-143a-49da-80b2-e60fc44ebaf4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.857171 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7ac47dbe-143a-49da-80b2-e60fc44ebaf4" (UID: "7ac47dbe-143a-49da-80b2-e60fc44ebaf4"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.863626 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7ac47dbe-143a-49da-80b2-e60fc44ebaf4" (UID: "7ac47dbe-143a-49da-80b2-e60fc44ebaf4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.922861 4695 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.923151 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.923245 4695 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.923459 4695 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.923572 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brmwt\" (UniqueName: \"kubernetes.io/projected/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-kube-api-access-brmwt\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:55 crc kubenswrapper[4695]: I1126 14:00:55.923652 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7ac47dbe-143a-49da-80b2-e60fc44ebaf4-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.392245 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" 
event={"ID":"7ac47dbe-143a-49da-80b2-e60fc44ebaf4","Type":"ContainerDied","Data":"a69cb83be8486c45b62bfaa97a28c9602831704243f41d7a4c8454b73b44dbe3"} Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.392320 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a69cb83be8486c45b62bfaa97a28c9602831704243f41d7a4c8454b73b44dbe3" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.392444 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.487479 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7"] Nov 26 14:00:56 crc kubenswrapper[4695]: E1126 14:00:56.487949 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac47dbe-143a-49da-80b2-e60fc44ebaf4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.487967 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac47dbe-143a-49da-80b2-e60fc44ebaf4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.488147 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ac47dbe-143a-49da-80b2-e60fc44ebaf4" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.488823 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.492766 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.492962 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.492984 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.493187 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.496044 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.498132 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7"] Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.536822 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.537184 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf7mm\" (UniqueName: \"kubernetes.io/projected/76b33613-bb4c-4e62-9574-4372603edc01-kube-api-access-sf7mm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: 
\"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.537307 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.537456 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.537591 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.639928 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.640043 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.640098 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf7mm\" (UniqueName: \"kubernetes.io/projected/76b33613-bb4c-4e62-9574-4372603edc01-kube-api-access-sf7mm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.640152 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.640206 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.645054 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.645828 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.646995 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.648826 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.664112 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf7mm\" (UniqueName: \"kubernetes.io/projected/76b33613-bb4c-4e62-9574-4372603edc01-kube-api-access-sf7mm\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:56 crc kubenswrapper[4695]: I1126 14:00:56.809324 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:00:57 crc kubenswrapper[4695]: I1126 14:00:57.342960 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7"] Nov 26 14:00:57 crc kubenswrapper[4695]: I1126 14:00:57.403132 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" event={"ID":"76b33613-bb4c-4e62-9574-4372603edc01","Type":"ContainerStarted","Data":"a90073eefcd0969dc275c05cb450ae44ca13dbc9adbb44fec773f2bf5f8c716f"} Nov 26 14:00:58 crc kubenswrapper[4695]: I1126 14:00:58.411242 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" event={"ID":"76b33613-bb4c-4e62-9574-4372603edc01","Type":"ContainerStarted","Data":"1285a85a4563a13a3a8ce42eae390f30867640b35a0b629dd6e0b4488e3609f6"} Nov 26 14:00:58 crc kubenswrapper[4695]: I1126 14:00:58.431385 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" podStartSLOduration=1.816828434 podStartE2EDuration="2.431364205s" podCreationTimestamp="2025-11-26 14:00:56 +0000 UTC" firstStartedPulling="2025-11-26 14:00:57.347947292 +0000 UTC m=+2240.983772374" lastFinishedPulling="2025-11-26 14:00:57.962483063 +0000 UTC m=+2241.598308145" observedRunningTime="2025-11-26 14:00:58.425580271 +0000 UTC m=+2242.061405353" watchObservedRunningTime="2025-11-26 14:00:58.431364205 +0000 UTC m=+2242.067189287" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.130343 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29402761-sd4pc"] Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.132165 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.142215 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29402761-sd4pc"] Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.206143 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-fernet-keys\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.206204 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-config-data\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.206269 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-combined-ca-bundle\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.206421 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfbml\" (UniqueName: \"kubernetes.io/projected/ab5969ee-b42f-466b-9087-adf2da1d7785-kube-api-access-zfbml\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.308103 4695 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zfbml\" (UniqueName: \"kubernetes.io/projected/ab5969ee-b42f-466b-9087-adf2da1d7785-kube-api-access-zfbml\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.308261 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-fernet-keys\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.308289 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-config-data\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.308311 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-combined-ca-bundle\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.314592 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-config-data\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.315376 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-fernet-keys\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.316025 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-combined-ca-bundle\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.325484 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfbml\" (UniqueName: \"kubernetes.io/projected/ab5969ee-b42f-466b-9087-adf2da1d7785-kube-api-access-zfbml\") pod \"keystone-cron-29402761-sd4pc\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.464304 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:00 crc kubenswrapper[4695]: I1126 14:01:00.906761 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29402761-sd4pc"] Nov 26 14:01:00 crc kubenswrapper[4695]: W1126 14:01:00.906900 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab5969ee_b42f_466b_9087_adf2da1d7785.slice/crio-7629b0b7d07efb00652c2b177583dcca7e136ab6c3c20bd9a618a078d9515d7d WatchSource:0}: Error finding container 7629b0b7d07efb00652c2b177583dcca7e136ab6c3c20bd9a618a078d9515d7d: Status 404 returned error can't find the container with id 7629b0b7d07efb00652c2b177583dcca7e136ab6c3c20bd9a618a078d9515d7d Nov 26 14:01:01 crc kubenswrapper[4695]: I1126 14:01:01.440694 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402761-sd4pc" event={"ID":"ab5969ee-b42f-466b-9087-adf2da1d7785","Type":"ContainerStarted","Data":"102b4c8b029ba04a10f8353f8ffc150c4dc8c4778004d76508a268d607b168f2"} Nov 26 14:01:01 crc kubenswrapper[4695]: I1126 14:01:01.441004 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402761-sd4pc" event={"ID":"ab5969ee-b42f-466b-9087-adf2da1d7785","Type":"ContainerStarted","Data":"7629b0b7d07efb00652c2b177583dcca7e136ab6c3c20bd9a618a078d9515d7d"} Nov 26 14:01:01 crc kubenswrapper[4695]: I1126 14:01:01.467121 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29402761-sd4pc" podStartSLOduration=1.467103479 podStartE2EDuration="1.467103479s" podCreationTimestamp="2025-11-26 14:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 14:01:01.454097318 +0000 UTC m=+2245.089922410" watchObservedRunningTime="2025-11-26 14:01:01.467103479 +0000 UTC m=+2245.102928561" Nov 26 14:01:03 crc 
kubenswrapper[4695]: I1126 14:01:03.459828 4695 generic.go:334] "Generic (PLEG): container finished" podID="ab5969ee-b42f-466b-9087-adf2da1d7785" containerID="102b4c8b029ba04a10f8353f8ffc150c4dc8c4778004d76508a268d607b168f2" exitCode=0 Nov 26 14:01:03 crc kubenswrapper[4695]: I1126 14:01:03.459882 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402761-sd4pc" event={"ID":"ab5969ee-b42f-466b-9087-adf2da1d7785","Type":"ContainerDied","Data":"102b4c8b029ba04a10f8353f8ffc150c4dc8c4778004d76508a268d607b168f2"} Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.775305 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.901303 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-config-data\") pod \"ab5969ee-b42f-466b-9087-adf2da1d7785\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.901386 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfbml\" (UniqueName: \"kubernetes.io/projected/ab5969ee-b42f-466b-9087-adf2da1d7785-kube-api-access-zfbml\") pod \"ab5969ee-b42f-466b-9087-adf2da1d7785\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.901419 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-combined-ca-bundle\") pod \"ab5969ee-b42f-466b-9087-adf2da1d7785\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.901475 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-fernet-keys\") pod \"ab5969ee-b42f-466b-9087-adf2da1d7785\" (UID: \"ab5969ee-b42f-466b-9087-adf2da1d7785\") " Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.906446 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ab5969ee-b42f-466b-9087-adf2da1d7785" (UID: "ab5969ee-b42f-466b-9087-adf2da1d7785"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.913618 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab5969ee-b42f-466b-9087-adf2da1d7785-kube-api-access-zfbml" (OuterVolumeSpecName: "kube-api-access-zfbml") pod "ab5969ee-b42f-466b-9087-adf2da1d7785" (UID: "ab5969ee-b42f-466b-9087-adf2da1d7785"). InnerVolumeSpecName "kube-api-access-zfbml". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.930699 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab5969ee-b42f-466b-9087-adf2da1d7785" (UID: "ab5969ee-b42f-466b-9087-adf2da1d7785"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:01:04 crc kubenswrapper[4695]: I1126 14:01:04.948389 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-config-data" (OuterVolumeSpecName: "config-data") pod "ab5969ee-b42f-466b-9087-adf2da1d7785" (UID: "ab5969ee-b42f-466b-9087-adf2da1d7785"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:01:05 crc kubenswrapper[4695]: I1126 14:01:05.004284 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 14:01:05 crc kubenswrapper[4695]: I1126 14:01:05.004328 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfbml\" (UniqueName: \"kubernetes.io/projected/ab5969ee-b42f-466b-9087-adf2da1d7785-kube-api-access-zfbml\") on node \"crc\" DevicePath \"\"" Nov 26 14:01:05 crc kubenswrapper[4695]: I1126 14:01:05.004340 4695 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 14:01:05 crc kubenswrapper[4695]: I1126 14:01:05.004369 4695 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ab5969ee-b42f-466b-9087-adf2da1d7785-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 26 14:01:05 crc kubenswrapper[4695]: E1126 14:01:05.255403 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab5969ee_b42f_466b_9087_adf2da1d7785.slice\": RecentStats: unable to find data in memory cache]" Nov 26 14:01:05 crc kubenswrapper[4695]: I1126 14:01:05.477184 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29402761-sd4pc" event={"ID":"ab5969ee-b42f-466b-9087-adf2da1d7785","Type":"ContainerDied","Data":"7629b0b7d07efb00652c2b177583dcca7e136ab6c3c20bd9a618a078d9515d7d"} Nov 26 14:01:05 crc kubenswrapper[4695]: I1126 14:01:05.477574 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7629b0b7d07efb00652c2b177583dcca7e136ab6c3c20bd9a618a078d9515d7d" Nov 
26 14:01:05 crc kubenswrapper[4695]: I1126 14:01:05.477221 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29402761-sd4pc" Nov 26 14:01:06 crc kubenswrapper[4695]: I1126 14:01:06.396956 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:01:06 crc kubenswrapper[4695]: I1126 14:01:06.397014 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:01:36 crc kubenswrapper[4695]: I1126 14:01:36.397041 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:01:36 crc kubenswrapper[4695]: I1126 14:01:36.397656 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:01:36 crc kubenswrapper[4695]: I1126 14:01:36.397713 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 14:01:36 crc kubenswrapper[4695]: I1126 14:01:36.398537 4695 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 14:01:36 crc kubenswrapper[4695]: I1126 14:01:36.398587 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" gracePeriod=600 Nov 26 14:01:36 crc kubenswrapper[4695]: E1126 14:01:36.525035 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:01:37 crc kubenswrapper[4695]: I1126 14:01:37.124888 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" exitCode=0 Nov 26 14:01:37 crc kubenswrapper[4695]: I1126 14:01:37.124924 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993"} Nov 26 14:01:37 crc kubenswrapper[4695]: I1126 14:01:37.125242 4695 scope.go:117] "RemoveContainer" containerID="9054ffcb43e1cd20adf0a64a79d1ce9e9ce7c6483384269e4ffce8dac0186885" Nov 26 14:01:37 crc 
kubenswrapper[4695]: I1126 14:01:37.125863 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:01:37 crc kubenswrapper[4695]: E1126 14:01:37.126135 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:01:51 crc kubenswrapper[4695]: I1126 14:01:51.162430 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:01:51 crc kubenswrapper[4695]: E1126 14:01:51.163890 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:02:03 crc kubenswrapper[4695]: I1126 14:02:03.162144 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:02:03 crc kubenswrapper[4695]: E1126 14:02:03.162911 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 
26 14:02:18 crc kubenswrapper[4695]: I1126 14:02:18.162986 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:02:18 crc kubenswrapper[4695]: E1126 14:02:18.163955 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:02:30 crc kubenswrapper[4695]: I1126 14:02:30.163098 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:02:30 crc kubenswrapper[4695]: E1126 14:02:30.164017 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:02:41 crc kubenswrapper[4695]: I1126 14:02:41.163050 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:02:41 crc kubenswrapper[4695]: E1126 14:02:41.164578 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" 
podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:02:54 crc kubenswrapper[4695]: I1126 14:02:54.162083 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:02:54 crc kubenswrapper[4695]: E1126 14:02:54.163017 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:03:07 crc kubenswrapper[4695]: I1126 14:03:07.174927 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:03:07 crc kubenswrapper[4695]: E1126 14:03:07.176577 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:03:21 crc kubenswrapper[4695]: I1126 14:03:21.162865 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:03:21 crc kubenswrapper[4695]: E1126 14:03:21.164057 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:03:33 crc kubenswrapper[4695]: I1126 14:03:33.163042 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:03:33 crc kubenswrapper[4695]: E1126 14:03:33.164568 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:03:45 crc kubenswrapper[4695]: I1126 14:03:45.162866 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:03:45 crc kubenswrapper[4695]: E1126 14:03:45.163748 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:03:57 crc kubenswrapper[4695]: I1126 14:03:57.168631 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:03:57 crc kubenswrapper[4695]: E1126 14:03:57.169526 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:04:12 crc kubenswrapper[4695]: I1126 14:04:12.164057 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:04:12 crc kubenswrapper[4695]: E1126 14:04:12.165906 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:04:25 crc kubenswrapper[4695]: I1126 14:04:25.163134 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:04:25 crc kubenswrapper[4695]: E1126 14:04:25.164444 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:04:40 crc kubenswrapper[4695]: I1126 14:04:40.171769 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:04:40 crc kubenswrapper[4695]: E1126 14:04:40.173921 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:04:52 crc kubenswrapper[4695]: I1126 14:04:52.162397 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:04:52 crc kubenswrapper[4695]: E1126 14:04:52.163262 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:05:06 crc kubenswrapper[4695]: I1126 14:05:06.163217 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:05:06 crc kubenswrapper[4695]: E1126 14:05:06.164762 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:05:06 crc kubenswrapper[4695]: I1126 14:05:06.368506 4695 generic.go:334] "Generic (PLEG): container finished" podID="76b33613-bb4c-4e62-9574-4372603edc01" containerID="1285a85a4563a13a3a8ce42eae390f30867640b35a0b629dd6e0b4488e3609f6" exitCode=0 Nov 26 14:05:06 crc kubenswrapper[4695]: I1126 14:05:06.368533 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" event={"ID":"76b33613-bb4c-4e62-9574-4372603edc01","Type":"ContainerDied","Data":"1285a85a4563a13a3a8ce42eae390f30867640b35a0b629dd6e0b4488e3609f6"} Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.774248 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.971630 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-secret-0\") pod \"76b33613-bb4c-4e62-9574-4372603edc01\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.972128 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-inventory\") pod \"76b33613-bb4c-4e62-9574-4372603edc01\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.972226 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf7mm\" (UniqueName: \"kubernetes.io/projected/76b33613-bb4c-4e62-9574-4372603edc01-kube-api-access-sf7mm\") pod \"76b33613-bb4c-4e62-9574-4372603edc01\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.972295 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-combined-ca-bundle\") pod \"76b33613-bb4c-4e62-9574-4372603edc01\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.972326 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-ssh-key\") pod \"76b33613-bb4c-4e62-9574-4372603edc01\" (UID: \"76b33613-bb4c-4e62-9574-4372603edc01\") " Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.978337 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76b33613-bb4c-4e62-9574-4372603edc01-kube-api-access-sf7mm" (OuterVolumeSpecName: "kube-api-access-sf7mm") pod "76b33613-bb4c-4e62-9574-4372603edc01" (UID: "76b33613-bb4c-4e62-9574-4372603edc01"). InnerVolumeSpecName "kube-api-access-sf7mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:05:07 crc kubenswrapper[4695]: I1126 14:05:07.978632 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "76b33613-bb4c-4e62-9574-4372603edc01" (UID: "76b33613-bb4c-4e62-9574-4372603edc01"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.000952 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-inventory" (OuterVolumeSpecName: "inventory") pod "76b33613-bb4c-4e62-9574-4372603edc01" (UID: "76b33613-bb4c-4e62-9574-4372603edc01"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.008856 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "76b33613-bb4c-4e62-9574-4372603edc01" (UID: "76b33613-bb4c-4e62-9574-4372603edc01"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.014084 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "76b33613-bb4c-4e62-9574-4372603edc01" (UID: "76b33613-bb4c-4e62-9574-4372603edc01"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.074275 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf7mm\" (UniqueName: \"kubernetes.io/projected/76b33613-bb4c-4e62-9574-4372603edc01-kube-api-access-sf7mm\") on node \"crc\" DevicePath \"\"" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.074318 4695 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.074331 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.074355 4695 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.074370 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/76b33613-bb4c-4e62-9574-4372603edc01-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.389897 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" 
event={"ID":"76b33613-bb4c-4e62-9574-4372603edc01","Type":"ContainerDied","Data":"a90073eefcd0969dc275c05cb450ae44ca13dbc9adbb44fec773f2bf5f8c716f"} Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.389947 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a90073eefcd0969dc275c05cb450ae44ca13dbc9adbb44fec773f2bf5f8c716f" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.389951 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.478480 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6"] Nov 26 14:05:08 crc kubenswrapper[4695]: E1126 14:05:08.478995 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76b33613-bb4c-4e62-9574-4372603edc01" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.479017 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="76b33613-bb4c-4e62-9574-4372603edc01" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 26 14:05:08 crc kubenswrapper[4695]: E1126 14:05:08.479053 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab5969ee-b42f-466b-9087-adf2da1d7785" containerName="keystone-cron" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.479061 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab5969ee-b42f-466b-9087-adf2da1d7785" containerName="keystone-cron" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.479308 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="76b33613-bb4c-4e62-9574-4372603edc01" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.479343 4695 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ab5969ee-b42f-466b-9087-adf2da1d7785" containerName="keystone-cron" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.480117 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.500280 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.500812 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.501632 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.501723 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.512161 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.512787 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.513045 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.529636 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6"] Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.594025 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.594098 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.595167 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.595532 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jb629\" (UniqueName: \"kubernetes.io/projected/9f43b614-f241-4689-b15b-26bdf3d6e72d-kube-api-access-jb629\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.595698 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.595877 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.595981 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.596673 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.596724 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697523 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697594 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697691 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697719 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697773 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697809 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697858 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697887 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jb629\" (UniqueName: \"kubernetes.io/projected/9f43b614-f241-4689-b15b-26bdf3d6e72d-kube-api-access-jb629\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.697928 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.699575 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.703145 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.703324 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.703744 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.703892 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc 
kubenswrapper[4695]: I1126 14:05:08.703918 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.704327 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.712979 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.719979 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jb629\" (UniqueName: \"kubernetes.io/projected/9f43b614-f241-4689-b15b-26bdf3d6e72d-kube-api-access-jb629\") pod \"nova-edpm-deployment-openstack-edpm-ipam-2jzr6\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:08 crc kubenswrapper[4695]: I1126 14:05:08.803960 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" Nov 26 14:05:09 crc kubenswrapper[4695]: I1126 14:05:09.345512 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6"] Nov 26 14:05:09 crc kubenswrapper[4695]: I1126 14:05:09.355111 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 14:05:09 crc kubenswrapper[4695]: I1126 14:05:09.399581 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" event={"ID":"9f43b614-f241-4689-b15b-26bdf3d6e72d","Type":"ContainerStarted","Data":"386e1d0acde1e6862a68b86e6105da8bbd39d36f3fe24497f01fab397e99dc9e"} Nov 26 14:05:11 crc kubenswrapper[4695]: I1126 14:05:11.418593 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" event={"ID":"9f43b614-f241-4689-b15b-26bdf3d6e72d","Type":"ContainerStarted","Data":"57394e3b8629a81c0ce9a2c00f2a2aa47868ec45b0d5aeb9512ee4f95c2ecc0d"} Nov 26 14:05:11 crc kubenswrapper[4695]: I1126 14:05:11.450340 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" podStartSLOduration=1.834088897 podStartE2EDuration="3.450319748s" podCreationTimestamp="2025-11-26 14:05:08 +0000 UTC" firstStartedPulling="2025-11-26 14:05:09.354932753 +0000 UTC m=+2492.990757835" lastFinishedPulling="2025-11-26 14:05:10.971163614 +0000 UTC m=+2494.606988686" observedRunningTime="2025-11-26 14:05:11.442658054 +0000 UTC m=+2495.078483156" watchObservedRunningTime="2025-11-26 14:05:11.450319748 +0000 UTC m=+2495.086144830" Nov 26 14:05:19 crc kubenswrapper[4695]: I1126 14:05:19.162287 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:05:19 crc kubenswrapper[4695]: E1126 14:05:19.163181 4695 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:05:30 crc kubenswrapper[4695]: I1126 14:05:30.162521 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:05:30 crc kubenswrapper[4695]: E1126 14:05:30.163269 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:05:45 crc kubenswrapper[4695]: I1126 14:05:45.163362 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:05:45 crc kubenswrapper[4695]: E1126 14:05:45.164686 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.362473 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zj6fb"] Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.364793 
4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.441135 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-utilities\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.441190 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-catalog-content\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.441483 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p89q\" (UniqueName: \"kubernetes.io/projected/014feca1-84fe-4c78-a3b3-d871ac6f5588-kube-api-access-8p89q\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.448969 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zj6fb"] Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.543491 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p89q\" (UniqueName: \"kubernetes.io/projected/014feca1-84fe-4c78-a3b3-d871ac6f5588-kube-api-access-8p89q\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.543865 4695 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-utilities\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.543988 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-catalog-content\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.544438 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-utilities\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.544568 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-catalog-content\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.567693 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p89q\" (UniqueName: \"kubernetes.io/projected/014feca1-84fe-4c78-a3b3-d871ac6f5588-kube-api-access-8p89q\") pod \"redhat-operators-zj6fb\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:48 crc kubenswrapper[4695]: I1126 14:05:48.684864 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:49 crc kubenswrapper[4695]: I1126 14:05:49.144473 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zj6fb"] Nov 26 14:05:49 crc kubenswrapper[4695]: I1126 14:05:49.827043 4695 generic.go:334] "Generic (PLEG): container finished" podID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerID="5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d" exitCode=0 Nov 26 14:05:49 crc kubenswrapper[4695]: I1126 14:05:49.827193 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj6fb" event={"ID":"014feca1-84fe-4c78-a3b3-d871ac6f5588","Type":"ContainerDied","Data":"5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d"} Nov 26 14:05:49 crc kubenswrapper[4695]: I1126 14:05:49.827311 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj6fb" event={"ID":"014feca1-84fe-4c78-a3b3-d871ac6f5588","Type":"ContainerStarted","Data":"5274ad753ab1dc9a9e57fd99e50bae56003f6f01cfdfefe2f0aaf1b105e262f1"} Nov 26 14:05:51 crc kubenswrapper[4695]: I1126 14:05:51.851802 4695 generic.go:334] "Generic (PLEG): container finished" podID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerID="9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165" exitCode=0 Nov 26 14:05:51 crc kubenswrapper[4695]: I1126 14:05:51.851924 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj6fb" event={"ID":"014feca1-84fe-4c78-a3b3-d871ac6f5588","Type":"ContainerDied","Data":"9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165"} Nov 26 14:05:52 crc kubenswrapper[4695]: I1126 14:05:52.863860 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj6fb" 
event={"ID":"014feca1-84fe-4c78-a3b3-d871ac6f5588","Type":"ContainerStarted","Data":"e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0"} Nov 26 14:05:52 crc kubenswrapper[4695]: I1126 14:05:52.882025 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zj6fb" podStartSLOduration=2.276211863 podStartE2EDuration="4.88200278s" podCreationTimestamp="2025-11-26 14:05:48 +0000 UTC" firstStartedPulling="2025-11-26 14:05:49.8301935 +0000 UTC m=+2533.466018582" lastFinishedPulling="2025-11-26 14:05:52.435984417 +0000 UTC m=+2536.071809499" observedRunningTime="2025-11-26 14:05:52.877855304 +0000 UTC m=+2536.513680396" watchObservedRunningTime="2025-11-26 14:05:52.88200278 +0000 UTC m=+2536.517827862" Nov 26 14:05:58 crc kubenswrapper[4695]: I1126 14:05:58.162815 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:05:58 crc kubenswrapper[4695]: E1126 14:05:58.163412 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:05:58 crc kubenswrapper[4695]: I1126 14:05:58.685529 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:58 crc kubenswrapper[4695]: I1126 14:05:58.685590 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:58 crc kubenswrapper[4695]: I1126 14:05:58.735962 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:58 crc kubenswrapper[4695]: I1126 14:05:58.959101 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:05:59 crc kubenswrapper[4695]: I1126 14:05:59.006972 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zj6fb"] Nov 26 14:06:00 crc kubenswrapper[4695]: I1126 14:06:00.931630 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zj6fb" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="registry-server" containerID="cri-o://e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0" gracePeriod=2 Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.382916 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.499012 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-catalog-content\") pod \"014feca1-84fe-4c78-a3b3-d871ac6f5588\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.499166 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-utilities\") pod \"014feca1-84fe-4c78-a3b3-d871ac6f5588\" (UID: \"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.499447 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p89q\" (UniqueName: \"kubernetes.io/projected/014feca1-84fe-4c78-a3b3-d871ac6f5588-kube-api-access-8p89q\") pod \"014feca1-84fe-4c78-a3b3-d871ac6f5588\" (UID: 
\"014feca1-84fe-4c78-a3b3-d871ac6f5588\") " Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.499889 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-utilities" (OuterVolumeSpecName: "utilities") pod "014feca1-84fe-4c78-a3b3-d871ac6f5588" (UID: "014feca1-84fe-4c78-a3b3-d871ac6f5588"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.500817 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.506947 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014feca1-84fe-4c78-a3b3-d871ac6f5588-kube-api-access-8p89q" (OuterVolumeSpecName: "kube-api-access-8p89q") pod "014feca1-84fe-4c78-a3b3-d871ac6f5588" (UID: "014feca1-84fe-4c78-a3b3-d871ac6f5588"). InnerVolumeSpecName "kube-api-access-8p89q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.603544 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p89q\" (UniqueName: \"kubernetes.io/projected/014feca1-84fe-4c78-a3b3-d871ac6f5588-kube-api-access-8p89q\") on node \"crc\" DevicePath \"\"" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.941740 4695 generic.go:334] "Generic (PLEG): container finished" podID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerID="e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0" exitCode=0 Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.941789 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj6fb" event={"ID":"014feca1-84fe-4c78-a3b3-d871ac6f5588","Type":"ContainerDied","Data":"e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0"} Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.942021 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zj6fb" event={"ID":"014feca1-84fe-4c78-a3b3-d871ac6f5588","Type":"ContainerDied","Data":"5274ad753ab1dc9a9e57fd99e50bae56003f6f01cfdfefe2f0aaf1b105e262f1"} Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.942052 4695 scope.go:117] "RemoveContainer" containerID="e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.941826 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zj6fb" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.960012 4695 scope.go:117] "RemoveContainer" containerID="9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165" Nov 26 14:06:01 crc kubenswrapper[4695]: I1126 14:06:01.978981 4695 scope.go:117] "RemoveContainer" containerID="5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.023527 4695 scope.go:117] "RemoveContainer" containerID="e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0" Nov 26 14:06:02 crc kubenswrapper[4695]: E1126 14:06:02.023956 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0\": container with ID starting with e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0 not found: ID does not exist" containerID="e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.024002 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0"} err="failed to get container status \"e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0\": rpc error: code = NotFound desc = could not find container \"e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0\": container with ID starting with e0d244ad81ce2e40fc89b7a670fa2f8c7049583e869801802841b2e8b336a9d0 not found: ID does not exist" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.024026 4695 scope.go:117] "RemoveContainer" containerID="9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165" Nov 26 14:06:02 crc kubenswrapper[4695]: E1126 14:06:02.024425 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165\": container with ID starting with 9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165 not found: ID does not exist" containerID="9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.024449 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165"} err="failed to get container status \"9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165\": rpc error: code = NotFound desc = could not find container \"9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165\": container with ID starting with 9d8d27f4cdca2978f50cf08507509f2e2e990b738efda1e609fbd4b2bf9d1165 not found: ID does not exist" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.024464 4695 scope.go:117] "RemoveContainer" containerID="5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d" Nov 26 14:06:02 crc kubenswrapper[4695]: E1126 14:06:02.024754 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d\": container with ID starting with 5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d not found: ID does not exist" containerID="5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.024772 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d"} err="failed to get container status \"5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d\": rpc error: code = NotFound desc = could not find container 
\"5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d\": container with ID starting with 5f35846f16d4667628a4fcd5867753ef672aad634d4db1f9647181e4d2c10c1d not found: ID does not exist" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.162212 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "014feca1-84fe-4c78-a3b3-d871ac6f5588" (UID: "014feca1-84fe-4c78-a3b3-d871ac6f5588"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.215721 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/014feca1-84fe-4c78-a3b3-d871ac6f5588-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.275359 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zj6fb"] Nov 26 14:06:02 crc kubenswrapper[4695]: I1126 14:06:02.286258 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zj6fb"] Nov 26 14:06:03 crc kubenswrapper[4695]: I1126 14:06:03.173239 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" path="/var/lib/kubelet/pods/014feca1-84fe-4c78-a3b3-d871ac6f5588/volumes" Nov 26 14:06:12 crc kubenswrapper[4695]: I1126 14:06:12.163788 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:06:12 crc kubenswrapper[4695]: E1126 14:06:12.164599 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:06:26 crc kubenswrapper[4695]: I1126 14:06:26.162973 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:06:26 crc kubenswrapper[4695]: E1126 14:06:26.163699 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:06:41 crc kubenswrapper[4695]: I1126 14:06:41.163601 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:06:42 crc kubenswrapper[4695]: I1126 14:06:42.303675 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"a5bc004105bd9a616745ebfd24ff38bede76fc786d06034362fdca56f0ac5619"} Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.046692 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gpt6r"] Nov 26 14:07:08 crc kubenswrapper[4695]: E1126 14:07:08.049178 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="registry-server" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.049204 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="registry-server" Nov 26 14:07:08 crc 
kubenswrapper[4695]: E1126 14:07:08.049248 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="extract-utilities" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.049278 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="extract-utilities" Nov 26 14:07:08 crc kubenswrapper[4695]: E1126 14:07:08.049323 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="extract-content" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.049336 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="extract-content" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.049687 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="014feca1-84fe-4c78-a3b3-d871ac6f5588" containerName="registry-server" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.052770 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.060141 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gpt6r"] Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.231984 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-utilities\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.232088 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9q4\" (UniqueName: \"kubernetes.io/projected/52974fac-558b-498b-9cee-54fefc48d057-kube-api-access-nc9q4\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.232574 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-catalog-content\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.335006 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-catalog-content\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.335260 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-utilities\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.335331 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9q4\" (UniqueName: \"kubernetes.io/projected/52974fac-558b-498b-9cee-54fefc48d057-kube-api-access-nc9q4\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.335820 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-catalog-content\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.336129 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-utilities\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.365829 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9q4\" (UniqueName: \"kubernetes.io/projected/52974fac-558b-498b-9cee-54fefc48d057-kube-api-access-nc9q4\") pod \"redhat-marketplace-gpt6r\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:08 crc kubenswrapper[4695]: I1126 14:07:08.372710 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:09 crc kubenswrapper[4695]: I1126 14:07:09.061514 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gpt6r"] Nov 26 14:07:09 crc kubenswrapper[4695]: I1126 14:07:09.570202 4695 generic.go:334] "Generic (PLEG): container finished" podID="52974fac-558b-498b-9cee-54fefc48d057" containerID="f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77" exitCode=0 Nov 26 14:07:09 crc kubenswrapper[4695]: I1126 14:07:09.570301 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpt6r" event={"ID":"52974fac-558b-498b-9cee-54fefc48d057","Type":"ContainerDied","Data":"f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77"} Nov 26 14:07:09 crc kubenswrapper[4695]: I1126 14:07:09.570523 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpt6r" event={"ID":"52974fac-558b-498b-9cee-54fefc48d057","Type":"ContainerStarted","Data":"640c9e7c0a28219f9a5b3d28c324d0711df124147b1bc2536a4f56ef4b0f5579"} Nov 26 14:07:11 crc kubenswrapper[4695]: I1126 14:07:11.592082 4695 generic.go:334] "Generic (PLEG): container finished" podID="52974fac-558b-498b-9cee-54fefc48d057" containerID="9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da" exitCode=0 Nov 26 14:07:11 crc kubenswrapper[4695]: I1126 14:07:11.592184 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpt6r" event={"ID":"52974fac-558b-498b-9cee-54fefc48d057","Type":"ContainerDied","Data":"9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da"} Nov 26 14:07:13 crc kubenswrapper[4695]: I1126 14:07:13.614601 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpt6r" 
event={"ID":"52974fac-558b-498b-9cee-54fefc48d057","Type":"ContainerStarted","Data":"6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e"} Nov 26 14:07:13 crc kubenswrapper[4695]: I1126 14:07:13.633015 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gpt6r" podStartSLOduration=2.482724546 podStartE2EDuration="5.632998429s" podCreationTimestamp="2025-11-26 14:07:08 +0000 UTC" firstStartedPulling="2025-11-26 14:07:09.573492272 +0000 UTC m=+2613.209317354" lastFinishedPulling="2025-11-26 14:07:12.723766155 +0000 UTC m=+2616.359591237" observedRunningTime="2025-11-26 14:07:13.631522542 +0000 UTC m=+2617.267347624" watchObservedRunningTime="2025-11-26 14:07:13.632998429 +0000 UTC m=+2617.268823511" Nov 26 14:07:18 crc kubenswrapper[4695]: I1126 14:07:18.372906 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:18 crc kubenswrapper[4695]: I1126 14:07:18.373523 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:18 crc kubenswrapper[4695]: I1126 14:07:18.424437 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:18 crc kubenswrapper[4695]: I1126 14:07:18.714825 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:18 crc kubenswrapper[4695]: I1126 14:07:18.789764 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gpt6r"] Nov 26 14:07:20 crc kubenswrapper[4695]: I1126 14:07:20.688006 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gpt6r" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="registry-server" 
containerID="cri-o://6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e" gracePeriod=2 Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.171402 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gpt6r" Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.333910 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nc9q4\" (UniqueName: \"kubernetes.io/projected/52974fac-558b-498b-9cee-54fefc48d057-kube-api-access-nc9q4\") pod \"52974fac-558b-498b-9cee-54fefc48d057\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.334546 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-utilities\") pod \"52974fac-558b-498b-9cee-54fefc48d057\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.334672 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-catalog-content\") pod \"52974fac-558b-498b-9cee-54fefc48d057\" (UID: \"52974fac-558b-498b-9cee-54fefc48d057\") " Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.341081 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-utilities" (OuterVolumeSpecName: "utilities") pod "52974fac-558b-498b-9cee-54fefc48d057" (UID: "52974fac-558b-498b-9cee-54fefc48d057"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.352746 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52974fac-558b-498b-9cee-54fefc48d057-kube-api-access-nc9q4" (OuterVolumeSpecName: "kube-api-access-nc9q4") pod "52974fac-558b-498b-9cee-54fefc48d057" (UID: "52974fac-558b-498b-9cee-54fefc48d057"). InnerVolumeSpecName "kube-api-access-nc9q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.437821 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nc9q4\" (UniqueName: \"kubernetes.io/projected/52974fac-558b-498b-9cee-54fefc48d057-kube-api-access-nc9q4\") on node \"crc\" DevicePath \"\"" Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.437865 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.701619 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpt6r" event={"ID":"52974fac-558b-498b-9cee-54fefc48d057","Type":"ContainerDied","Data":"6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e"} Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.701680 4695 scope.go:117] "RemoveContainer" containerID="6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e" Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.701881 4695 generic.go:334] "Generic (PLEG): container finished" podID="52974fac-558b-498b-9cee-54fefc48d057" containerID="6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e" exitCode=0 Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.701914 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gpt6r" 
event={"ID":"52974fac-558b-498b-9cee-54fefc48d057","Type":"ContainerDied","Data":"640c9e7c0a28219f9a5b3d28c324d0711df124147b1bc2536a4f56ef4b0f5579"}
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.702526 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gpt6r"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.718994 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52974fac-558b-498b-9cee-54fefc48d057" (UID: "52974fac-558b-498b-9cee-54fefc48d057"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.724117 4695 scope.go:117] "RemoveContainer" containerID="9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.744127 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52974fac-558b-498b-9cee-54fefc48d057-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.756757 4695 scope.go:117] "RemoveContainer" containerID="f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.801411 4695 scope.go:117] "RemoveContainer" containerID="6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e"
Nov 26 14:07:21 crc kubenswrapper[4695]: E1126 14:07:21.801996 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e\": container with ID starting with 6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e not found: ID does not exist" containerID="6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.802044 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e"} err="failed to get container status \"6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e\": rpc error: code = NotFound desc = could not find container \"6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e\": container with ID starting with 6f5c485c98957c7fde9b452f318f097de43461584dbb11fe675633bb59e4245e not found: ID does not exist"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.802094 4695 scope.go:117] "RemoveContainer" containerID="9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da"
Nov 26 14:07:21 crc kubenswrapper[4695]: E1126 14:07:21.802467 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da\": container with ID starting with 9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da not found: ID does not exist" containerID="9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.802520 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da"} err="failed to get container status \"9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da\": rpc error: code = NotFound desc = could not find container \"9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da\": container with ID starting with 9201ec35c377b7c925c53f1802c76b839abfd31de2c8258bc52741202ebf23da not found: ID does not exist"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.802550 4695 scope.go:117] "RemoveContainer" containerID="f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77"
Nov 26 14:07:21 crc kubenswrapper[4695]: E1126 14:07:21.802963 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77\": container with ID starting with f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77 not found: ID does not exist" containerID="f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77"
Nov 26 14:07:21 crc kubenswrapper[4695]: I1126 14:07:21.802991 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77"} err="failed to get container status \"f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77\": rpc error: code = NotFound desc = could not find container \"f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77\": container with ID starting with f8e44c1b969da0f968f3cb95a50d0bbdc4e1e3ed4621e6aa4efa8499d0f89e77 not found: ID does not exist"
Nov 26 14:07:22 crc kubenswrapper[4695]: I1126 14:07:22.059149 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gpt6r"]
Nov 26 14:07:22 crc kubenswrapper[4695]: I1126 14:07:22.072782 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gpt6r"]
Nov 26 14:07:23 crc kubenswrapper[4695]: I1126 14:07:23.174957 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52974fac-558b-498b-9cee-54fefc48d057" path="/var/lib/kubelet/pods/52974fac-558b-498b-9cee-54fefc48d057/volumes"
Nov 26 14:07:50 crc kubenswrapper[4695]: I1126 14:07:50.959108 4695 generic.go:334] "Generic (PLEG): container finished" podID="9f43b614-f241-4689-b15b-26bdf3d6e72d" containerID="57394e3b8629a81c0ce9a2c00f2a2aa47868ec45b0d5aeb9512ee4f95c2ecc0d"
exitCode=0
Nov 26 14:07:50 crc kubenswrapper[4695]: I1126 14:07:50.959206 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" event={"ID":"9f43b614-f241-4689-b15b-26bdf3d6e72d","Type":"ContainerDied","Data":"57394e3b8629a81c0ce9a2c00f2a2aa47868ec45b0d5aeb9512ee4f95c2ecc0d"}
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.424245 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6"
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.548781 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-1\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.548850 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-extra-config-0\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.549054 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-0\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.549080 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-combined-ca-bundle\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.549101 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-0\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.549194 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-ssh-key\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.549232 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jb629\" (UniqueName: \"kubernetes.io/projected/9f43b614-f241-4689-b15b-26bdf3d6e72d-kube-api-access-jb629\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.549301 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-1\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.549328 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-inventory\") pod \"9f43b614-f241-4689-b15b-26bdf3d6e72d\" (UID: \"9f43b614-f241-4689-b15b-26bdf3d6e72d\") "
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.556142 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f43b614-f241-4689-b15b-26bdf3d6e72d-kube-api-access-jb629" (OuterVolumeSpecName: "kube-api-access-jb629") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "kube-api-access-jb629". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.569249 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.576805 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.586451 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.586865 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.588250 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.588814 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.589261 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "nova-migration-ssh-key-1".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.607574 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-inventory" (OuterVolumeSpecName: "inventory") pod "9f43b614-f241-4689-b15b-26bdf3d6e72d" (UID: "9f43b614-f241-4689-b15b-26bdf3d6e72d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651884 4695 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651932 4695 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651947 4695 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651960 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651970 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jb629\" (UniqueName: \"kubernetes.io/projected/9f43b614-f241-4689-b15b-26bdf3d6e72d-kube-api-access-jb629\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651979 4695 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651988 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-inventory\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.651998 4695 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.652009 4695 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/9f43b614-f241-4689-b15b-26bdf3d6e72d-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.977764 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6" event={"ID":"9f43b614-f241-4689-b15b-26bdf3d6e72d","Type":"ContainerDied","Data":"386e1d0acde1e6862a68b86e6105da8bbd39d36f3fe24497f01fab397e99dc9e"}
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.977810 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="386e1d0acde1e6862a68b86e6105da8bbd39d36f3fe24497f01fab397e99dc9e"
Nov 26 14:07:52 crc kubenswrapper[4695]: I1126 14:07:52.978072 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-2jzr6"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.070861 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"]
Nov 26 14:07:53 crc kubenswrapper[4695]: E1126 14:07:53.071279 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="registry-server"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.071298 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="registry-server"
Nov 26 14:07:53 crc kubenswrapper[4695]: E1126 14:07:53.071324 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="extract-utilities"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.071380 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="extract-utilities"
Nov 26 14:07:53 crc kubenswrapper[4695]: E1126 14:07:53.071400 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="extract-content"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.071408 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="extract-content"
Nov 26 14:07:53 crc kubenswrapper[4695]: E1126 14:07:53.071435 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f43b614-f241-4689-b15b-26bdf3d6e72d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.071441 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f43b614-f241-4689-b15b-26bdf3d6e72d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.071644 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f43b614-f241-4689-b15b-26bdf3d6e72d" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.071666 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="52974fac-558b-498b-9cee-54fefc48d057" containerName="registry-server"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.072426 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.074767 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.074810 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.075068 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.076031 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-slrz7"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.086536 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.087700 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"]
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.267213 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID:
\"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.267659 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.267686 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.267878 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.267934 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxx6q\" (UniqueName: \"kubernetes.io/projected/0fdaed7b-61f1-4840-88c7-f997a45a27ca-kube-api-access-wxx6q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.268032 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.268178 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.370526 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.370667 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.370717 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.370747 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.370789 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.370820 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxx6q\" (UniqueName: \"kubernetes.io/projected/0fdaed7b-61f1-4840-88c7-f997a45a27ca-kube-api-access-wxx6q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.370861 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.374979 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.375209 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.375404 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.375790 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc
kubenswrapper[4695]: I1126 14:07:53.375983 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.378372 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.386772 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxx6q\" (UniqueName: \"kubernetes.io/projected/0fdaed7b-61f1-4840-88c7-f997a45a27ca-kube-api-access-wxx6q\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.391658 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.872590 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs"]
Nov 26 14:07:53 crc kubenswrapper[4695]: I1126 14:07:53.986993 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs" event={"ID":"0fdaed7b-61f1-4840-88c7-f997a45a27ca","Type":"ContainerStarted","Data":"b152796f4110ef9e6732780f5e120e627916cbc75a9e2b1d4c0d98d5f5fe14f1"}
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.003963 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs" event={"ID":"0fdaed7b-61f1-4840-88c7-f997a45a27ca","Type":"ContainerStarted","Data":"bbfbb0801961b1c395f78d9767873d7832a91e4c5eab411c1812917809fc5525"}
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.024193 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs" podStartSLOduration=2.125977827 podStartE2EDuration="3.02417415s" podCreationTimestamp="2025-11-26 14:07:53 +0000 UTC" firstStartedPulling="2025-11-26 14:07:53.878005679 +0000 UTC m=+2657.513830761" lastFinishedPulling="2025-11-26 14:07:54.776202002 +0000 UTC m=+2658.412027084" observedRunningTime="2025-11-26 14:07:56.018796839 +0000 UTC m=+2659.654621921" watchObservedRunningTime="2025-11-26 14:07:56.02417415 +0000 UTC m=+2659.659999232"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.492519 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nxc72"]
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.495262 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.512280 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxc72"]
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.633914 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-catalog-content\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.634066 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-utilities\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.634114 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8mx2\" (UniqueName: \"kubernetes.io/projected/f9833e3a-6099-45f5-8b9a-87260315183c-kube-api-access-w8mx2\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.736630 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8mx2\" (UniqueName: \"kubernetes.io/projected/f9833e3a-6099-45f5-8b9a-87260315183c-kube-api-access-w8mx2\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.736787 4695
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-catalog-content\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.736934 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-utilities\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.737409 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-catalog-content\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.737537 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-utilities\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.760588 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8mx2\" (UniqueName: \"kubernetes.io/projected/f9833e3a-6099-45f5-8b9a-87260315183c-kube-api-access-w8mx2\") pod \"community-operators-nxc72\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:56 crc kubenswrapper[4695]: I1126 14:07:56.821307 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:07:57 crc kubenswrapper[4695]: I1126 14:07:57.358845 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nxc72"]
Nov 26 14:07:57 crc kubenswrapper[4695]: W1126 14:07:57.361312 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9833e3a_6099_45f5_8b9a_87260315183c.slice/crio-7910585b5ed0b4b63b1924da907d46579e008be66c9c4fd6b076fc5ae0739fd8 WatchSource:0}: Error finding container 7910585b5ed0b4b63b1924da907d46579e008be66c9c4fd6b076fc5ae0739fd8: Status 404 returned error can't find the container with id 7910585b5ed0b4b63b1924da907d46579e008be66c9c4fd6b076fc5ae0739fd8
Nov 26 14:07:58 crc kubenswrapper[4695]: I1126 14:07:58.035542 4695 generic.go:334] "Generic (PLEG): container finished" podID="f9833e3a-6099-45f5-8b9a-87260315183c" containerID="c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23" exitCode=0
Nov 26 14:07:58 crc kubenswrapper[4695]: I1126 14:07:58.035652 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxc72" event={"ID":"f9833e3a-6099-45f5-8b9a-87260315183c","Type":"ContainerDied","Data":"c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23"}
Nov 26 14:07:58 crc kubenswrapper[4695]: I1126 14:07:58.035895 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxc72" event={"ID":"f9833e3a-6099-45f5-8b9a-87260315183c","Type":"ContainerStarted","Data":"7910585b5ed0b4b63b1924da907d46579e008be66c9c4fd6b076fc5ae0739fd8"}
Nov 26 14:08:00 crc kubenswrapper[4695]: I1126 14:08:00.054749 4695 generic.go:334] "Generic (PLEG): container finished" podID="f9833e3a-6099-45f5-8b9a-87260315183c" containerID="dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19" exitCode=0
Nov 26 14:08:00 crc kubenswrapper[4695]: I1126 14:08:00.054829 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxc72" event={"ID":"f9833e3a-6099-45f5-8b9a-87260315183c","Type":"ContainerDied","Data":"dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19"}
Nov 26 14:08:01 crc kubenswrapper[4695]: I1126 14:08:01.066362 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxc72" event={"ID":"f9833e3a-6099-45f5-8b9a-87260315183c","Type":"ContainerStarted","Data":"3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8"}
Nov 26 14:08:01 crc kubenswrapper[4695]: I1126 14:08:01.086583 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nxc72" podStartSLOduration=2.366061838 podStartE2EDuration="5.086563744s" podCreationTimestamp="2025-11-26 14:07:56 +0000 UTC" firstStartedPulling="2025-11-26 14:07:58.037010114 +0000 UTC m=+2661.672835196" lastFinishedPulling="2025-11-26 14:08:00.75751202 +0000 UTC m=+2664.393337102" observedRunningTime="2025-11-26 14:08:01.083286319 +0000 UTC m=+2664.719111401" watchObservedRunningTime="2025-11-26 14:08:01.086563744 +0000 UTC m=+2664.722388826"
Nov 26 14:08:06 crc kubenswrapper[4695]: I1126 14:08:06.821973 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:08:06 crc kubenswrapper[4695]: I1126 14:08:06.822989 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:08:06 crc kubenswrapper[4695]: I1126 14:08:06.877326 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nxc72"
Nov 26 14:08:07 crc kubenswrapper[4695]: I1126 14:08:07.158703 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nxc72"
Nov 26
14:08:07 crc kubenswrapper[4695]: I1126 14:08:07.207026 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxc72"] Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.141078 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nxc72" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="registry-server" containerID="cri-o://3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8" gracePeriod=2 Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.634810 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nxc72" Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.795508 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-catalog-content\") pod \"f9833e3a-6099-45f5-8b9a-87260315183c\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.795655 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-utilities\") pod \"f9833e3a-6099-45f5-8b9a-87260315183c\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.795818 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8mx2\" (UniqueName: \"kubernetes.io/projected/f9833e3a-6099-45f5-8b9a-87260315183c-kube-api-access-w8mx2\") pod \"f9833e3a-6099-45f5-8b9a-87260315183c\" (UID: \"f9833e3a-6099-45f5-8b9a-87260315183c\") " Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.796563 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-utilities" (OuterVolumeSpecName: "utilities") pod "f9833e3a-6099-45f5-8b9a-87260315183c" (UID: "f9833e3a-6099-45f5-8b9a-87260315183c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.813521 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9833e3a-6099-45f5-8b9a-87260315183c-kube-api-access-w8mx2" (OuterVolumeSpecName: "kube-api-access-w8mx2") pod "f9833e3a-6099-45f5-8b9a-87260315183c" (UID: "f9833e3a-6099-45f5-8b9a-87260315183c"). InnerVolumeSpecName "kube-api-access-w8mx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.850147 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9833e3a-6099-45f5-8b9a-87260315183c" (UID: "f9833e3a-6099-45f5-8b9a-87260315183c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.898143 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8mx2\" (UniqueName: \"kubernetes.io/projected/f9833e3a-6099-45f5-8b9a-87260315183c-kube-api-access-w8mx2\") on node \"crc\" DevicePath \"\"" Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.898204 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:08:09 crc kubenswrapper[4695]: I1126 14:08:09.898214 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9833e3a-6099-45f5-8b9a-87260315183c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.154770 4695 generic.go:334] "Generic (PLEG): container finished" podID="f9833e3a-6099-45f5-8b9a-87260315183c" containerID="3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8" exitCode=0 Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.154844 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nxc72" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.154845 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxc72" event={"ID":"f9833e3a-6099-45f5-8b9a-87260315183c","Type":"ContainerDied","Data":"3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8"} Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.154994 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nxc72" event={"ID":"f9833e3a-6099-45f5-8b9a-87260315183c","Type":"ContainerDied","Data":"7910585b5ed0b4b63b1924da907d46579e008be66c9c4fd6b076fc5ae0739fd8"} Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.155018 4695 scope.go:117] "RemoveContainer" containerID="3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.192998 4695 scope.go:117] "RemoveContainer" containerID="dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.193218 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nxc72"] Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.201183 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nxc72"] Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.235936 4695 scope.go:117] "RemoveContainer" containerID="c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.260989 4695 scope.go:117] "RemoveContainer" containerID="3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8" Nov 26 14:08:10 crc kubenswrapper[4695]: E1126 14:08:10.261510 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8\": container with ID starting with 3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8 not found: ID does not exist" containerID="3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.261549 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8"} err="failed to get container status \"3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8\": rpc error: code = NotFound desc = could not find container \"3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8\": container with ID starting with 3d1ff72e2bd3a8aff56ca9108fd04442747b4f43a228ba17e8a8b4ecc0e5bdc8 not found: ID does not exist" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.261574 4695 scope.go:117] "RemoveContainer" containerID="dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19" Nov 26 14:08:10 crc kubenswrapper[4695]: E1126 14:08:10.261885 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19\": container with ID starting with dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19 not found: ID does not exist" containerID="dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.261924 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19"} err="failed to get container status \"dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19\": rpc error: code = NotFound desc = could not find container \"dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19\": container with ID 
starting with dbab348f36105995ac956dd8fb9e8d3038ab6bacd449a352c3895feed8690c19 not found: ID does not exist" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.261946 4695 scope.go:117] "RemoveContainer" containerID="c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23" Nov 26 14:08:10 crc kubenswrapper[4695]: E1126 14:08:10.262205 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23\": container with ID starting with c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23 not found: ID does not exist" containerID="c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23" Nov 26 14:08:10 crc kubenswrapper[4695]: I1126 14:08:10.262225 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23"} err="failed to get container status \"c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23\": rpc error: code = NotFound desc = could not find container \"c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23\": container with ID starting with c9265fcf70d0aba681f508c5802f78d9ac7bee68964657b83cff4784597e4e23 not found: ID does not exist" Nov 26 14:08:11 crc kubenswrapper[4695]: I1126 14:08:11.172396 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" path="/var/lib/kubelet/pods/f9833e3a-6099-45f5-8b9a-87260315183c/volumes" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.485312 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-brsm8"] Nov 26 14:08:29 crc kubenswrapper[4695]: E1126 14:08:29.486631 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="extract-utilities" Nov 26 14:08:29 crc 
kubenswrapper[4695]: I1126 14:08:29.486651 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="extract-utilities" Nov 26 14:08:29 crc kubenswrapper[4695]: E1126 14:08:29.486670 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="extract-content" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.486677 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="extract-content" Nov 26 14:08:29 crc kubenswrapper[4695]: E1126 14:08:29.486695 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="registry-server" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.486706 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="registry-server" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.487051 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9833e3a-6099-45f5-8b9a-87260315183c" containerName="registry-server" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.488634 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.500811 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brsm8"] Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.585164 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-catalog-content\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.585228 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-utilities\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.585252 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhft\" (UniqueName: \"kubernetes.io/projected/466ac8fa-2238-4bd5-86db-96cc70de87e4-kube-api-access-xrhft\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.686873 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-catalog-content\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.686931 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-utilities\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.686958 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhft\" (UniqueName: \"kubernetes.io/projected/466ac8fa-2238-4bd5-86db-96cc70de87e4-kube-api-access-xrhft\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.687800 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-catalog-content\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.687892 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-utilities\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.720437 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhft\" (UniqueName: \"kubernetes.io/projected/466ac8fa-2238-4bd5-86db-96cc70de87e4-kube-api-access-xrhft\") pod \"certified-operators-brsm8\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:29 crc kubenswrapper[4695]: I1126 14:08:29.819030 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:30 crc kubenswrapper[4695]: I1126 14:08:30.369142 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-brsm8"] Nov 26 14:08:31 crc kubenswrapper[4695]: I1126 14:08:31.357955 4695 generic.go:334] "Generic (PLEG): container finished" podID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerID="1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87" exitCode=0 Nov 26 14:08:31 crc kubenswrapper[4695]: I1126 14:08:31.358004 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brsm8" event={"ID":"466ac8fa-2238-4bd5-86db-96cc70de87e4","Type":"ContainerDied","Data":"1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87"} Nov 26 14:08:31 crc kubenswrapper[4695]: I1126 14:08:31.358032 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brsm8" event={"ID":"466ac8fa-2238-4bd5-86db-96cc70de87e4","Type":"ContainerStarted","Data":"5ea48684baf869062bd4cd7e392728c8e48ef7bb421cc8808ddf2e57fc77eb5c"} Nov 26 14:08:32 crc kubenswrapper[4695]: I1126 14:08:32.368222 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brsm8" event={"ID":"466ac8fa-2238-4bd5-86db-96cc70de87e4","Type":"ContainerStarted","Data":"97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007"} Nov 26 14:08:33 crc kubenswrapper[4695]: I1126 14:08:33.382832 4695 generic.go:334] "Generic (PLEG): container finished" podID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerID="97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007" exitCode=0 Nov 26 14:08:33 crc kubenswrapper[4695]: I1126 14:08:33.382908 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brsm8" 
event={"ID":"466ac8fa-2238-4bd5-86db-96cc70de87e4","Type":"ContainerDied","Data":"97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007"} Nov 26 14:08:34 crc kubenswrapper[4695]: I1126 14:08:34.395902 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brsm8" event={"ID":"466ac8fa-2238-4bd5-86db-96cc70de87e4","Type":"ContainerStarted","Data":"a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b"} Nov 26 14:08:34 crc kubenswrapper[4695]: I1126 14:08:34.416222 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-brsm8" podStartSLOduration=2.817142906 podStartE2EDuration="5.416202784s" podCreationTimestamp="2025-11-26 14:08:29 +0000 UTC" firstStartedPulling="2025-11-26 14:08:31.35991757 +0000 UTC m=+2694.995742662" lastFinishedPulling="2025-11-26 14:08:33.958977438 +0000 UTC m=+2697.594802540" observedRunningTime="2025-11-26 14:08:34.413768837 +0000 UTC m=+2698.049593929" watchObservedRunningTime="2025-11-26 14:08:34.416202784 +0000 UTC m=+2698.052027876" Nov 26 14:08:39 crc kubenswrapper[4695]: I1126 14:08:39.820394 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:39 crc kubenswrapper[4695]: I1126 14:08:39.821077 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:39 crc kubenswrapper[4695]: I1126 14:08:39.875231 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:40 crc kubenswrapper[4695]: I1126 14:08:40.501245 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:40 crc kubenswrapper[4695]: I1126 14:08:40.563231 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-brsm8"] Nov 26 14:08:42 crc kubenswrapper[4695]: I1126 14:08:42.472696 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-brsm8" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="registry-server" containerID="cri-o://a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b" gracePeriod=2 Nov 26 14:08:42 crc kubenswrapper[4695]: I1126 14:08:42.905237 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.062022 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrhft\" (UniqueName: \"kubernetes.io/projected/466ac8fa-2238-4bd5-86db-96cc70de87e4-kube-api-access-xrhft\") pod \"466ac8fa-2238-4bd5-86db-96cc70de87e4\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.062289 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-catalog-content\") pod \"466ac8fa-2238-4bd5-86db-96cc70de87e4\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.062333 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-utilities\") pod \"466ac8fa-2238-4bd5-86db-96cc70de87e4\" (UID: \"466ac8fa-2238-4bd5-86db-96cc70de87e4\") " Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.063385 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-utilities" (OuterVolumeSpecName: "utilities") pod "466ac8fa-2238-4bd5-86db-96cc70de87e4" (UID: 
"466ac8fa-2238-4bd5-86db-96cc70de87e4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.073631 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/466ac8fa-2238-4bd5-86db-96cc70de87e4-kube-api-access-xrhft" (OuterVolumeSpecName: "kube-api-access-xrhft") pod "466ac8fa-2238-4bd5-86db-96cc70de87e4" (UID: "466ac8fa-2238-4bd5-86db-96cc70de87e4"). InnerVolumeSpecName "kube-api-access-xrhft". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.119583 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "466ac8fa-2238-4bd5-86db-96cc70de87e4" (UID: "466ac8fa-2238-4bd5-86db-96cc70de87e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.163949 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.163987 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/466ac8fa-2238-4bd5-86db-96cc70de87e4-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.164001 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrhft\" (UniqueName: \"kubernetes.io/projected/466ac8fa-2238-4bd5-86db-96cc70de87e4-kube-api-access-xrhft\") on node \"crc\" DevicePath \"\"" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.485822 4695 generic.go:334] "Generic (PLEG): container finished" 
podID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerID="a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b" exitCode=0 Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.485903 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brsm8" event={"ID":"466ac8fa-2238-4bd5-86db-96cc70de87e4","Type":"ContainerDied","Data":"a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b"} Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.485982 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-brsm8" event={"ID":"466ac8fa-2238-4bd5-86db-96cc70de87e4","Type":"ContainerDied","Data":"5ea48684baf869062bd4cd7e392728c8e48ef7bb421cc8808ddf2e57fc77eb5c"} Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.485904 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-brsm8" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.486023 4695 scope.go:117] "RemoveContainer" containerID="a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.518209 4695 scope.go:117] "RemoveContainer" containerID="97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.523058 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-brsm8"] Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.533563 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-brsm8"] Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.545967 4695 scope.go:117] "RemoveContainer" containerID="1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.599824 4695 scope.go:117] "RemoveContainer" 
containerID="a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b" Nov 26 14:08:43 crc kubenswrapper[4695]: E1126 14:08:43.600775 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b\": container with ID starting with a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b not found: ID does not exist" containerID="a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.600821 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b"} err="failed to get container status \"a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b\": rpc error: code = NotFound desc = could not find container \"a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b\": container with ID starting with a99456137851388c1cf31b8530c97de487545e72a5fc2baf00f23e17b4ab152b not found: ID does not exist" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.600848 4695 scope.go:117] "RemoveContainer" containerID="97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007" Nov 26 14:08:43 crc kubenswrapper[4695]: E1126 14:08:43.601413 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007\": container with ID starting with 97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007 not found: ID does not exist" containerID="97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.601474 4695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007"} err="failed to get container status \"97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007\": rpc error: code = NotFound desc = could not find container \"97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007\": container with ID starting with 97e85d2e1da745931365c160112d301b16ac55e20691ec2fb0e3a5fa50426007 not found: ID does not exist" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.601510 4695 scope.go:117] "RemoveContainer" containerID="1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87" Nov 26 14:08:43 crc kubenswrapper[4695]: E1126 14:08:43.601779 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87\": container with ID starting with 1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87 not found: ID does not exist" containerID="1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87" Nov 26 14:08:43 crc kubenswrapper[4695]: I1126 14:08:43.601817 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87"} err="failed to get container status \"1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87\": rpc error: code = NotFound desc = could not find container \"1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87\": container with ID starting with 1a7beb87a51a50957b3b9db01d28e86501ffe4f0a8214cb7a31a1c653a2fcd87 not found: ID does not exist" Nov 26 14:08:45 crc kubenswrapper[4695]: I1126 14:08:45.173478 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" path="/var/lib/kubelet/pods/466ac8fa-2238-4bd5-86db-96cc70de87e4/volumes" Nov 26 14:09:06 crc kubenswrapper[4695]: I1126 
14:09:06.396281 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:09:06 crc kubenswrapper[4695]: I1126 14:09:06.396943 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:09:36 crc kubenswrapper[4695]: I1126 14:09:36.396842 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:09:36 crc kubenswrapper[4695]: I1126 14:09:36.397398 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:10:06 crc kubenswrapper[4695]: I1126 14:10:06.396442 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:10:06 crc kubenswrapper[4695]: I1126 14:10:06.397052 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" 
podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:10:06 crc kubenswrapper[4695]: I1126 14:10:06.397106 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 14:10:06 crc kubenswrapper[4695]: I1126 14:10:06.397993 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a5bc004105bd9a616745ebfd24ff38bede76fc786d06034362fdca56f0ac5619"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 14:10:06 crc kubenswrapper[4695]: I1126 14:10:06.398079 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://a5bc004105bd9a616745ebfd24ff38bede76fc786d06034362fdca56f0ac5619" gracePeriod=600 Nov 26 14:10:07 crc kubenswrapper[4695]: I1126 14:10:07.285979 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="a5bc004105bd9a616745ebfd24ff38bede76fc786d06034362fdca56f0ac5619" exitCode=0 Nov 26 14:10:07 crc kubenswrapper[4695]: I1126 14:10:07.286388 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"a5bc004105bd9a616745ebfd24ff38bede76fc786d06034362fdca56f0ac5619"} Nov 26 14:10:07 crc kubenswrapper[4695]: I1126 14:10:07.286424 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7"} Nov 26 14:10:07 crc kubenswrapper[4695]: I1126 14:10:07.286445 4695 scope.go:117] "RemoveContainer" containerID="c060daddb66ba1bfdc57269515fbacabe73d44ffaba6834de028fc514959b993" Nov 26 14:10:14 crc kubenswrapper[4695]: I1126 14:10:14.357161 4695 generic.go:334] "Generic (PLEG): container finished" podID="0fdaed7b-61f1-4840-88c7-f997a45a27ca" containerID="bbfbb0801961b1c395f78d9767873d7832a91e4c5eab411c1812917809fc5525" exitCode=0 Nov 26 14:10:14 crc kubenswrapper[4695]: I1126 14:10:14.357227 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs" event={"ID":"0fdaed7b-61f1-4840-88c7-f997a45a27ca","Type":"ContainerDied","Data":"bbfbb0801961b1c395f78d9767873d7832a91e4c5eab411c1812917809fc5525"} Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.817624 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs" Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.932911 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-2\") pod \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.932951 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ssh-key\") pod \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.933071 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-0\") pod \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.933185 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-1\") pod \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.933214 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-inventory\") pod \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.933233 4695 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxx6q\" (UniqueName: \"kubernetes.io/projected/0fdaed7b-61f1-4840-88c7-f997a45a27ca-kube-api-access-wxx6q\") pod \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.933266 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-telemetry-combined-ca-bundle\") pod \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\" (UID: \"0fdaed7b-61f1-4840-88c7-f997a45a27ca\") " Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.944540 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "0fdaed7b-61f1-4840-88c7-f997a45a27ca" (UID: "0fdaed7b-61f1-4840-88c7-f997a45a27ca"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.944540 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdaed7b-61f1-4840-88c7-f997a45a27ca-kube-api-access-wxx6q" (OuterVolumeSpecName: "kube-api-access-wxx6q") pod "0fdaed7b-61f1-4840-88c7-f997a45a27ca" (UID: "0fdaed7b-61f1-4840-88c7-f997a45a27ca"). InnerVolumeSpecName "kube-api-access-wxx6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.961769 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "0fdaed7b-61f1-4840-88c7-f997a45a27ca" (UID: "0fdaed7b-61f1-4840-88c7-f997a45a27ca"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.962697 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-inventory" (OuterVolumeSpecName: "inventory") pod "0fdaed7b-61f1-4840-88c7-f997a45a27ca" (UID: "0fdaed7b-61f1-4840-88c7-f997a45a27ca"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.964081 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "0fdaed7b-61f1-4840-88c7-f997a45a27ca" (UID: "0fdaed7b-61f1-4840-88c7-f997a45a27ca"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.966463 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "0fdaed7b-61f1-4840-88c7-f997a45a27ca" (UID: "0fdaed7b-61f1-4840-88c7-f997a45a27ca"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:10:15 crc kubenswrapper[4695]: I1126 14:10:15.988505 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0fdaed7b-61f1-4840-88c7-f997a45a27ca" (UID: "0fdaed7b-61f1-4840-88c7-f997a45a27ca"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.036053 4695 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.036082 4695 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.036106 4695 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-inventory\") on node \"crc\" DevicePath \"\"" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.036117 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxx6q\" (UniqueName: \"kubernetes.io/projected/0fdaed7b-61f1-4840-88c7-f997a45a27ca-kube-api-access-wxx6q\") on node \"crc\" DevicePath \"\"" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.036127 4695 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.036137 4695 reconciler_common.go:293] 
"Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.036144 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0fdaed7b-61f1-4840-88c7-f997a45a27ca-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.379681 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs" event={"ID":"0fdaed7b-61f1-4840-88c7-f997a45a27ca","Type":"ContainerDied","Data":"b152796f4110ef9e6732780f5e120e627916cbc75a9e2b1d4c0d98d5f5fe14f1"} Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.379722 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b152796f4110ef9e6732780f5e120e627916cbc75a9e2b1d4c0d98d5f5fe14f1" Nov 26 14:10:16 crc kubenswrapper[4695]: I1126 14:10:16.379788 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.923818 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Nov 26 14:10:57 crc kubenswrapper[4695]: E1126 14:10:57.925981 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="extract-utilities" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.926013 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="extract-utilities" Nov 26 14:10:57 crc kubenswrapper[4695]: E1126 14:10:57.926042 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdaed7b-61f1-4840-88c7-f997a45a27ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.926054 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdaed7b-61f1-4840-88c7-f997a45a27ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 26 14:10:57 crc kubenswrapper[4695]: E1126 14:10:57.926069 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="extract-content" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.926075 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="extract-content" Nov 26 14:10:57 crc kubenswrapper[4695]: E1126 14:10:57.926098 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="registry-server" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.926104 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="registry-server" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.926271 4695 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="466ac8fa-2238-4bd5-86db-96cc70de87e4" containerName="registry-server" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.926283 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdaed7b-61f1-4840-88c7-f997a45a27ca" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.926982 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.928839 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.936740 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.936957 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.937268 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nwrms" Nov 26 14:10:57 crc kubenswrapper[4695]: I1126 14:10:57.960534 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.042569 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.042743 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.042826 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.042886 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.042944 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.042996 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.043042 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q26c\" 
(UniqueName: \"kubernetes.io/projected/d7930b08-66ca-496a-94a1-b68e2fe60177-kube-api-access-7q26c\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.043095 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.043154 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-config-data\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144494 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144554 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144580 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144618 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144661 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7q26c\" (UniqueName: \"kubernetes.io/projected/d7930b08-66ca-496a-94a1-b68e2fe60177-kube-api-access-7q26c\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144703 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144725 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-config-data\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144810 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" 
(UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.144893 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.145459 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.145498 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.145738 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.145810 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " 
pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.145991 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-config-data\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.151009 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.151032 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.151456 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.164511 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q26c\" (UniqueName: \"kubernetes.io/projected/d7930b08-66ca-496a-94a1-b68e2fe60177-kube-api-access-7q26c\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.184201 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.270424 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.723235 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.730614 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 14:10:58 crc kubenswrapper[4695]: I1126 14:10:58.765310 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d7930b08-66ca-496a-94a1-b68e2fe60177","Type":"ContainerStarted","Data":"2aff1dbd2b99b969d915aa33da068a2457d5f9c86e60bf9c020401c675fb1d85"} Nov 26 14:11:43 crc kubenswrapper[4695]: E1126 14:11:43.859124 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Nov 26 14:11:43 crc kubenswrapper[4695]: E1126 14:11:43.859918 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7q26c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(d7930b08-66ca-496a-94a1-b68e2fe60177): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 14:11:43 crc kubenswrapper[4695]: E1126 14:11:43.861148 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="d7930b08-66ca-496a-94a1-b68e2fe60177" Nov 26 14:11:44 crc kubenswrapper[4695]: E1126 14:11:44.220640 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="d7930b08-66ca-496a-94a1-b68e2fe60177" Nov 26 14:11:58 crc 
kubenswrapper[4695]: I1126 14:11:58.765880 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Nov 26 14:12:00 crc kubenswrapper[4695]: I1126 14:12:00.377749 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d7930b08-66ca-496a-94a1-b68e2fe60177","Type":"ContainerStarted","Data":"abe370053fe34b2219192d0e56898f50da89a0c8abe3d97751edbff45686f53a"} Nov 26 14:12:00 crc kubenswrapper[4695]: I1126 14:12:00.398573 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.366172855 podStartE2EDuration="1m4.398551786s" podCreationTimestamp="2025-11-26 14:10:56 +0000 UTC" firstStartedPulling="2025-11-26 14:10:58.730393205 +0000 UTC m=+2842.366218297" lastFinishedPulling="2025-11-26 14:11:58.762772146 +0000 UTC m=+2902.398597228" observedRunningTime="2025-11-26 14:12:00.394837447 +0000 UTC m=+2904.030662529" watchObservedRunningTime="2025-11-26 14:12:00.398551786 +0000 UTC m=+2904.034376868" Nov 26 14:12:36 crc kubenswrapper[4695]: I1126 14:12:36.396850 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:12:36 crc kubenswrapper[4695]: I1126 14:12:36.397464 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:13:06 crc kubenswrapper[4695]: I1126 14:13:06.396541 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:13:06 crc kubenswrapper[4695]: I1126 14:13:06.397083 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:13:36 crc kubenswrapper[4695]: I1126 14:13:36.397020 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:13:36 crc kubenswrapper[4695]: I1126 14:13:36.398027 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:13:36 crc kubenswrapper[4695]: I1126 14:13:36.398092 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 14:13:36 crc kubenswrapper[4695]: I1126 14:13:36.399041 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Nov 26 14:13:36 crc kubenswrapper[4695]: I1126 14:13:36.399129 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" gracePeriod=600 Nov 26 14:13:36 crc kubenswrapper[4695]: E1126 14:13:36.542862 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:13:37 crc kubenswrapper[4695]: I1126 14:13:37.348647 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" exitCode=0 Nov 26 14:13:37 crc kubenswrapper[4695]: I1126 14:13:37.348713 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7"} Nov 26 14:13:37 crc kubenswrapper[4695]: I1126 14:13:37.349481 4695 scope.go:117] "RemoveContainer" containerID="a5bc004105bd9a616745ebfd24ff38bede76fc786d06034362fdca56f0ac5619" Nov 26 14:13:37 crc kubenswrapper[4695]: I1126 14:13:37.351037 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:13:37 crc kubenswrapper[4695]: E1126 14:13:37.351850 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:13:51 crc kubenswrapper[4695]: I1126 14:13:51.162903 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:13:51 crc kubenswrapper[4695]: E1126 14:13:51.164282 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:14:02 crc kubenswrapper[4695]: I1126 14:14:02.162873 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:14:02 crc kubenswrapper[4695]: E1126 14:14:02.163600 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:14:17 crc kubenswrapper[4695]: I1126 14:14:17.169113 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:14:17 crc kubenswrapper[4695]: E1126 14:14:17.169976 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:14:32 crc kubenswrapper[4695]: I1126 14:14:32.162264 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:14:32 crc kubenswrapper[4695]: E1126 14:14:32.163410 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:14:46 crc kubenswrapper[4695]: I1126 14:14:46.162748 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:14:46 crc kubenswrapper[4695]: E1126 14:14:46.163494 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.169463 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv"] Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.169607 4695 scope.go:117] "RemoveContainer" 
containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:15:00 crc kubenswrapper[4695]: E1126 14:15:00.171383 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.173202 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.178951 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.179403 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.192631 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv"] Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.301584 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64276651-b615-41de-a9c5-915f75e6ff16-secret-volume\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.301737 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" 
(UniqueName: \"kubernetes.io/configmap/64276651-b615-41de-a9c5-915f75e6ff16-config-volume\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.302200 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb9dt\" (UniqueName: \"kubernetes.io/projected/64276651-b615-41de-a9c5-915f75e6ff16-kube-api-access-wb9dt\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.404248 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb9dt\" (UniqueName: \"kubernetes.io/projected/64276651-b615-41de-a9c5-915f75e6ff16-kube-api-access-wb9dt\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.404744 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64276651-b615-41de-a9c5-915f75e6ff16-secret-volume\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.404969 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64276651-b615-41de-a9c5-915f75e6ff16-config-volume\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 
14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.405762 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64276651-b615-41de-a9c5-915f75e6ff16-config-volume\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.416926 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64276651-b615-41de-a9c5-915f75e6ff16-secret-volume\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.419226 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb9dt\" (UniqueName: \"kubernetes.io/projected/64276651-b615-41de-a9c5-915f75e6ff16-kube-api-access-wb9dt\") pod \"collect-profiles-29402775-n7fxv\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.509817 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:00 crc kubenswrapper[4695]: I1126 14:15:00.947370 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv"] Nov 26 14:15:01 crc kubenswrapper[4695]: I1126 14:15:01.082290 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" event={"ID":"64276651-b615-41de-a9c5-915f75e6ff16","Type":"ContainerStarted","Data":"91c486ef12d645d1ff29d39e2dbcb3ff694e26cb350a408a2fcdfcba47876038"} Nov 26 14:15:02 crc kubenswrapper[4695]: I1126 14:15:02.090980 4695 generic.go:334] "Generic (PLEG): container finished" podID="64276651-b615-41de-a9c5-915f75e6ff16" containerID="88a6e834f883341f8da1b7d6a3ddb498954cc4fbeef329484dd2ded0163a694b" exitCode=0 Nov 26 14:15:02 crc kubenswrapper[4695]: I1126 14:15:02.091035 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" event={"ID":"64276651-b615-41de-a9c5-915f75e6ff16","Type":"ContainerDied","Data":"88a6e834f883341f8da1b7d6a3ddb498954cc4fbeef329484dd2ded0163a694b"} Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.490075 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.572808 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64276651-b615-41de-a9c5-915f75e6ff16-secret-volume\") pod \"64276651-b615-41de-a9c5-915f75e6ff16\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.573217 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64276651-b615-41de-a9c5-915f75e6ff16-config-volume\") pod \"64276651-b615-41de-a9c5-915f75e6ff16\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.574153 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64276651-b615-41de-a9c5-915f75e6ff16-config-volume" (OuterVolumeSpecName: "config-volume") pod "64276651-b615-41de-a9c5-915f75e6ff16" (UID: "64276651-b615-41de-a9c5-915f75e6ff16"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.574323 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wb9dt\" (UniqueName: \"kubernetes.io/projected/64276651-b615-41de-a9c5-915f75e6ff16-kube-api-access-wb9dt\") pod \"64276651-b615-41de-a9c5-915f75e6ff16\" (UID: \"64276651-b615-41de-a9c5-915f75e6ff16\") " Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.575101 4695 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64276651-b615-41de-a9c5-915f75e6ff16-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.579712 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64276651-b615-41de-a9c5-915f75e6ff16-kube-api-access-wb9dt" (OuterVolumeSpecName: "kube-api-access-wb9dt") pod "64276651-b615-41de-a9c5-915f75e6ff16" (UID: "64276651-b615-41de-a9c5-915f75e6ff16"). InnerVolumeSpecName "kube-api-access-wb9dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.579824 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64276651-b615-41de-a9c5-915f75e6ff16-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "64276651-b615-41de-a9c5-915f75e6ff16" (UID: "64276651-b615-41de-a9c5-915f75e6ff16"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.677968 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wb9dt\" (UniqueName: \"kubernetes.io/projected/64276651-b615-41de-a9c5-915f75e6ff16-kube-api-access-wb9dt\") on node \"crc\" DevicePath \"\"" Nov 26 14:15:03 crc kubenswrapper[4695]: I1126 14:15:03.678020 4695 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/64276651-b615-41de-a9c5-915f75e6ff16-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 14:15:04 crc kubenswrapper[4695]: I1126 14:15:04.113982 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" event={"ID":"64276651-b615-41de-a9c5-915f75e6ff16","Type":"ContainerDied","Data":"91c486ef12d645d1ff29d39e2dbcb3ff694e26cb350a408a2fcdfcba47876038"} Nov 26 14:15:04 crc kubenswrapper[4695]: I1126 14:15:04.114363 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91c486ef12d645d1ff29d39e2dbcb3ff694e26cb350a408a2fcdfcba47876038" Nov 26 14:15:04 crc kubenswrapper[4695]: I1126 14:15:04.114029 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402775-n7fxv" Nov 26 14:15:04 crc kubenswrapper[4695]: I1126 14:15:04.569851 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx"] Nov 26 14:15:04 crc kubenswrapper[4695]: I1126 14:15:04.577460 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402730-jx7mx"] Nov 26 14:15:05 crc kubenswrapper[4695]: I1126 14:15:05.177949 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="739d1e68-f65b-49d5-8ad6-4feff696f45a" path="/var/lib/kubelet/pods/739d1e68-f65b-49d5-8ad6-4feff696f45a/volumes" Nov 26 14:15:08 crc kubenswrapper[4695]: I1126 14:15:08.301302 4695 scope.go:117] "RemoveContainer" containerID="95aebe13a05e482934847e56eeac7a4c52acb95bb55428c18d105609644d8d5a" Nov 26 14:15:12 crc kubenswrapper[4695]: I1126 14:15:12.163237 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:15:12 crc kubenswrapper[4695]: E1126 14:15:12.163736 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:15:23 crc kubenswrapper[4695]: I1126 14:15:23.162417 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:15:23 crc kubenswrapper[4695]: E1126 14:15:23.163306 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:15:37 crc kubenswrapper[4695]: I1126 14:15:37.169630 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:15:37 crc kubenswrapper[4695]: E1126 14:15:37.170570 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:15:52 crc kubenswrapper[4695]: I1126 14:15:52.163163 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:15:52 crc kubenswrapper[4695]: E1126 14:15:52.163903 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:16:05 crc kubenswrapper[4695]: I1126 14:16:05.162415 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:16:05 crc kubenswrapper[4695]: E1126 14:16:05.163189 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:16:17 crc kubenswrapper[4695]: I1126 14:16:17.170069 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:16:17 crc kubenswrapper[4695]: E1126 14:16:17.170977 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:16:28 crc kubenswrapper[4695]: I1126 14:16:28.162802 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:16:28 crc kubenswrapper[4695]: E1126 14:16:28.163971 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.930145 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rw7xt"] Nov 26 14:16:41 crc kubenswrapper[4695]: E1126 14:16:41.931174 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64276651-b615-41de-a9c5-915f75e6ff16" 
containerName="collect-profiles" Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.931188 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="64276651-b615-41de-a9c5-915f75e6ff16" containerName="collect-profiles" Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.931507 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="64276651-b615-41de-a9c5-915f75e6ff16" containerName="collect-profiles" Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.932968 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.945931 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rw7xt"] Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.991598 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5ffr\" (UniqueName: \"kubernetes.io/projected/56d7de4c-34d5-454b-bc24-607775059157-kube-api-access-t5ffr\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.991708 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-utilities\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:41 crc kubenswrapper[4695]: I1126 14:16:41.991746 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-catalog-content\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " 
pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc kubenswrapper[4695]: I1126 14:16:42.093136 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5ffr\" (UniqueName: \"kubernetes.io/projected/56d7de4c-34d5-454b-bc24-607775059157-kube-api-access-t5ffr\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc kubenswrapper[4695]: I1126 14:16:42.093187 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-utilities\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc kubenswrapper[4695]: I1126 14:16:42.093224 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-catalog-content\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc kubenswrapper[4695]: I1126 14:16:42.093831 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-utilities\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc kubenswrapper[4695]: I1126 14:16:42.093842 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-catalog-content\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc 
kubenswrapper[4695]: I1126 14:16:42.111162 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5ffr\" (UniqueName: \"kubernetes.io/projected/56d7de4c-34d5-454b-bc24-607775059157-kube-api-access-t5ffr\") pod \"redhat-operators-rw7xt\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc kubenswrapper[4695]: I1126 14:16:42.252290 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:42 crc kubenswrapper[4695]: I1126 14:16:42.760820 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rw7xt"] Nov 26 14:16:43 crc kubenswrapper[4695]: I1126 14:16:43.057875 4695 generic.go:334] "Generic (PLEG): container finished" podID="56d7de4c-34d5-454b-bc24-607775059157" containerID="ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca" exitCode=0 Nov 26 14:16:43 crc kubenswrapper[4695]: I1126 14:16:43.058169 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rw7xt" event={"ID":"56d7de4c-34d5-454b-bc24-607775059157","Type":"ContainerDied","Data":"ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca"} Nov 26 14:16:43 crc kubenswrapper[4695]: I1126 14:16:43.058195 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rw7xt" event={"ID":"56d7de4c-34d5-454b-bc24-607775059157","Type":"ContainerStarted","Data":"e306e86ca396639b361a7f9a6db03fc40abc36ce606e9519bb14e5f6c4d6d10b"} Nov 26 14:16:43 crc kubenswrapper[4695]: I1126 14:16:43.060327 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 14:16:43 crc kubenswrapper[4695]: I1126 14:16:43.162042 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:16:43 crc 
kubenswrapper[4695]: E1126 14:16:43.162311 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:16:44 crc kubenswrapper[4695]: I1126 14:16:44.091674 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rw7xt" event={"ID":"56d7de4c-34d5-454b-bc24-607775059157","Type":"ContainerStarted","Data":"27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45"} Nov 26 14:16:45 crc kubenswrapper[4695]: I1126 14:16:45.103693 4695 generic.go:334] "Generic (PLEG): container finished" podID="56d7de4c-34d5-454b-bc24-607775059157" containerID="27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45" exitCode=0 Nov 26 14:16:45 crc kubenswrapper[4695]: I1126 14:16:45.103733 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rw7xt" event={"ID":"56d7de4c-34d5-454b-bc24-607775059157","Type":"ContainerDied","Data":"27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45"} Nov 26 14:16:46 crc kubenswrapper[4695]: I1126 14:16:46.114621 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rw7xt" event={"ID":"56d7de4c-34d5-454b-bc24-607775059157","Type":"ContainerStarted","Data":"99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a"} Nov 26 14:16:46 crc kubenswrapper[4695]: I1126 14:16:46.138108 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rw7xt" podStartSLOduration=2.599868785 podStartE2EDuration="5.138082462s" podCreationTimestamp="2025-11-26 14:16:41 +0000 UTC" 
firstStartedPulling="2025-11-26 14:16:43.060003238 +0000 UTC m=+3186.695828320" lastFinishedPulling="2025-11-26 14:16:45.598216915 +0000 UTC m=+3189.234041997" observedRunningTime="2025-11-26 14:16:46.13175361 +0000 UTC m=+3189.767578722" watchObservedRunningTime="2025-11-26 14:16:46.138082462 +0000 UTC m=+3189.773907544" Nov 26 14:16:52 crc kubenswrapper[4695]: I1126 14:16:52.252775 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:52 crc kubenswrapper[4695]: I1126 14:16:52.253419 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:52 crc kubenswrapper[4695]: I1126 14:16:52.298936 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:53 crc kubenswrapper[4695]: I1126 14:16:53.230258 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:53 crc kubenswrapper[4695]: I1126 14:16:53.272680 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rw7xt"] Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.207055 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rw7xt" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="registry-server" containerID="cri-o://99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a" gracePeriod=2 Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.712590 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.868550 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5ffr\" (UniqueName: \"kubernetes.io/projected/56d7de4c-34d5-454b-bc24-607775059157-kube-api-access-t5ffr\") pod \"56d7de4c-34d5-454b-bc24-607775059157\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.869415 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-catalog-content\") pod \"56d7de4c-34d5-454b-bc24-607775059157\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.869468 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-utilities\") pod \"56d7de4c-34d5-454b-bc24-607775059157\" (UID: \"56d7de4c-34d5-454b-bc24-607775059157\") " Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.870564 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-utilities" (OuterVolumeSpecName: "utilities") pod "56d7de4c-34d5-454b-bc24-607775059157" (UID: "56d7de4c-34d5-454b-bc24-607775059157"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.876729 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56d7de4c-34d5-454b-bc24-607775059157-kube-api-access-t5ffr" (OuterVolumeSpecName: "kube-api-access-t5ffr") pod "56d7de4c-34d5-454b-bc24-607775059157" (UID: "56d7de4c-34d5-454b-bc24-607775059157"). InnerVolumeSpecName "kube-api-access-t5ffr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.964634 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56d7de4c-34d5-454b-bc24-607775059157" (UID: "56d7de4c-34d5-454b-bc24-607775059157"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.996775 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5ffr\" (UniqueName: \"kubernetes.io/projected/56d7de4c-34d5-454b-bc24-607775059157-kube-api-access-t5ffr\") on node \"crc\" DevicePath \"\"" Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.996816 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:16:55 crc kubenswrapper[4695]: I1126 14:16:55.996829 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56d7de4c-34d5-454b-bc24-607775059157-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.162610 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:16:56 crc kubenswrapper[4695]: E1126 14:16:56.163051 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:16:56 
crc kubenswrapper[4695]: I1126 14:16:56.218290 4695 generic.go:334] "Generic (PLEG): container finished" podID="56d7de4c-34d5-454b-bc24-607775059157" containerID="99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a" exitCode=0 Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.218369 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rw7xt" event={"ID":"56d7de4c-34d5-454b-bc24-607775059157","Type":"ContainerDied","Data":"99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a"} Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.218415 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rw7xt" event={"ID":"56d7de4c-34d5-454b-bc24-607775059157","Type":"ContainerDied","Data":"e306e86ca396639b361a7f9a6db03fc40abc36ce606e9519bb14e5f6c4d6d10b"} Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.218445 4695 scope.go:117] "RemoveContainer" containerID="99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.218533 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rw7xt" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.237420 4695 scope.go:117] "RemoveContainer" containerID="27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.253218 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rw7xt"] Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.265281 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rw7xt"] Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.284659 4695 scope.go:117] "RemoveContainer" containerID="ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.316662 4695 scope.go:117] "RemoveContainer" containerID="99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a" Nov 26 14:16:56 crc kubenswrapper[4695]: E1126 14:16:56.317364 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a\": container with ID starting with 99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a not found: ID does not exist" containerID="99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.317438 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a"} err="failed to get container status \"99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a\": rpc error: code = NotFound desc = could not find container \"99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a\": container with ID starting with 99fa61bdca1e3c3ec18641c0d9a28b74f792a99a72053cd03f328f2c3e2c2b1a not found: ID does 
not exist" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.317478 4695 scope.go:117] "RemoveContainer" containerID="27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45" Nov 26 14:16:56 crc kubenswrapper[4695]: E1126 14:16:56.317861 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45\": container with ID starting with 27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45 not found: ID does not exist" containerID="27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.317906 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45"} err="failed to get container status \"27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45\": rpc error: code = NotFound desc = could not find container \"27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45\": container with ID starting with 27c9f44757e6e500934a712c35d3845ab5ccd83bed8f1aec393ff41f9cc87e45 not found: ID does not exist" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.317949 4695 scope.go:117] "RemoveContainer" containerID="ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca" Nov 26 14:16:56 crc kubenswrapper[4695]: E1126 14:16:56.318380 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca\": container with ID starting with ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca not found: ID does not exist" containerID="ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca" Nov 26 14:16:56 crc kubenswrapper[4695]: I1126 14:16:56.318414 4695 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca"} err="failed to get container status \"ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca\": rpc error: code = NotFound desc = could not find container \"ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca\": container with ID starting with ae8636b54558416b8b7fd82b5a3e091659280d499508138b6e9dc191c087e2ca not found: ID does not exist" Nov 26 14:16:57 crc kubenswrapper[4695]: I1126 14:16:57.175696 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56d7de4c-34d5-454b-bc24-607775059157" path="/var/lib/kubelet/pods/56d7de4c-34d5-454b-bc24-607775059157/volumes" Nov 26 14:17:07 crc kubenswrapper[4695]: I1126 14:17:07.174294 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:17:07 crc kubenswrapper[4695]: E1126 14:17:07.175388 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:17:18 crc kubenswrapper[4695]: I1126 14:17:18.162622 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:17:18 crc kubenswrapper[4695]: E1126 14:17:18.163292 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:17:31 crc kubenswrapper[4695]: I1126 14:17:31.163455 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:17:31 crc kubenswrapper[4695]: E1126 14:17:31.164616 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.452374 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dct4"] Nov 26 14:17:37 crc kubenswrapper[4695]: E1126 14:17:37.453442 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="registry-server" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.453459 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="registry-server" Nov 26 14:17:37 crc kubenswrapper[4695]: E1126 14:17:37.453476 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="extract-content" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.453482 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="extract-content" Nov 26 14:17:37 crc kubenswrapper[4695]: E1126 14:17:37.453520 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="extract-utilities" Nov 26 14:17:37 crc kubenswrapper[4695]: 
I1126 14:17:37.453527 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="extract-utilities" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.453721 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="56d7de4c-34d5-454b-bc24-607775059157" containerName="registry-server" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.455285 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.464799 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dct4"] Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.565427 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-utilities\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.565770 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-catalog-content\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.565860 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffvl\" (UniqueName: \"kubernetes.io/projected/e4bdd66b-e095-4d81-b47e-b38fca2c3127-kube-api-access-6ffvl\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc 
kubenswrapper[4695]: I1126 14:17:37.667771 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-utilities\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.668068 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-catalog-content\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.668184 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ffvl\" (UniqueName: \"kubernetes.io/projected/e4bdd66b-e095-4d81-b47e-b38fca2c3127-kube-api-access-6ffvl\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.668401 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-utilities\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.668603 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-catalog-content\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.694383 4695 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffvl\" (UniqueName: \"kubernetes.io/projected/e4bdd66b-e095-4d81-b47e-b38fca2c3127-kube-api-access-6ffvl\") pod \"redhat-marketplace-6dct4\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:37 crc kubenswrapper[4695]: I1126 14:17:37.786556 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:38 crc kubenswrapper[4695]: I1126 14:17:38.222176 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dct4"] Nov 26 14:17:38 crc kubenswrapper[4695]: I1126 14:17:38.595863 4695 generic.go:334] "Generic (PLEG): container finished" podID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerID="af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541" exitCode=0 Nov 26 14:17:38 crc kubenswrapper[4695]: I1126 14:17:38.595958 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dct4" event={"ID":"e4bdd66b-e095-4d81-b47e-b38fca2c3127","Type":"ContainerDied","Data":"af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541"} Nov 26 14:17:38 crc kubenswrapper[4695]: I1126 14:17:38.596203 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dct4" event={"ID":"e4bdd66b-e095-4d81-b47e-b38fca2c3127","Type":"ContainerStarted","Data":"0c51f71f274e19e9c29cbc59d9ed277652e69d183654665c1a5cb2ffc8ed4a60"} Nov 26 14:17:39 crc kubenswrapper[4695]: I1126 14:17:39.605676 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dct4" event={"ID":"e4bdd66b-e095-4d81-b47e-b38fca2c3127","Type":"ContainerStarted","Data":"1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898"} Nov 26 14:17:40 crc kubenswrapper[4695]: I1126 14:17:40.615905 4695 
generic.go:334] "Generic (PLEG): container finished" podID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerID="1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898" exitCode=0 Nov 26 14:17:40 crc kubenswrapper[4695]: I1126 14:17:40.615954 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dct4" event={"ID":"e4bdd66b-e095-4d81-b47e-b38fca2c3127","Type":"ContainerDied","Data":"1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898"} Nov 26 14:17:41 crc kubenswrapper[4695]: I1126 14:17:41.627626 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dct4" event={"ID":"e4bdd66b-e095-4d81-b47e-b38fca2c3127","Type":"ContainerStarted","Data":"eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2"} Nov 26 14:17:41 crc kubenswrapper[4695]: I1126 14:17:41.653904 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dct4" podStartSLOduration=2.033204412 podStartE2EDuration="4.653887645s" podCreationTimestamp="2025-11-26 14:17:37 +0000 UTC" firstStartedPulling="2025-11-26 14:17:38.59786672 +0000 UTC m=+3242.233691802" lastFinishedPulling="2025-11-26 14:17:41.218549953 +0000 UTC m=+3244.854375035" observedRunningTime="2025-11-26 14:17:41.650856068 +0000 UTC m=+3245.286681150" watchObservedRunningTime="2025-11-26 14:17:41.653887645 +0000 UTC m=+3245.289712727" Nov 26 14:17:46 crc kubenswrapper[4695]: I1126 14:17:46.161982 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:17:46 crc kubenswrapper[4695]: E1126 14:17:46.162741 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:17:47 crc kubenswrapper[4695]: I1126 14:17:47.788629 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:47 crc kubenswrapper[4695]: I1126 14:17:47.789001 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:47 crc kubenswrapper[4695]: I1126 14:17:47.855648 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:48 crc kubenswrapper[4695]: I1126 14:17:48.735494 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:48 crc kubenswrapper[4695]: I1126 14:17:48.785327 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dct4"] Nov 26 14:17:50 crc kubenswrapper[4695]: I1126 14:17:50.701881 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dct4" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="registry-server" containerID="cri-o://eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2" gracePeriod=2 Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.273396 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dct4" Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.441692 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-utilities\") pod \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.442003 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ffvl\" (UniqueName: \"kubernetes.io/projected/e4bdd66b-e095-4d81-b47e-b38fca2c3127-kube-api-access-6ffvl\") pod \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.442271 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-catalog-content\") pod \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\" (UID: \"e4bdd66b-e095-4d81-b47e-b38fca2c3127\") " Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.443505 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-utilities" (OuterVolumeSpecName: "utilities") pod "e4bdd66b-e095-4d81-b47e-b38fca2c3127" (UID: "e4bdd66b-e095-4d81-b47e-b38fca2c3127"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.450785 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4bdd66b-e095-4d81-b47e-b38fca2c3127-kube-api-access-6ffvl" (OuterVolumeSpecName: "kube-api-access-6ffvl") pod "e4bdd66b-e095-4d81-b47e-b38fca2c3127" (UID: "e4bdd66b-e095-4d81-b47e-b38fca2c3127"). InnerVolumeSpecName "kube-api-access-6ffvl". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.459379 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4bdd66b-e095-4d81-b47e-b38fca2c3127" (UID: "e4bdd66b-e095-4d81-b47e-b38fca2c3127"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.544879 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.544927 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4bdd66b-e095-4d81-b47e-b38fca2c3127-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.544941 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ffvl\" (UniqueName: \"kubernetes.io/projected/e4bdd66b-e095-4d81-b47e-b38fca2c3127-kube-api-access-6ffvl\") on node \"crc\" DevicePath \"\""
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.731313 4695 generic.go:334] "Generic (PLEG): container finished" podID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerID="eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2" exitCode=0
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.731403 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dct4" event={"ID":"e4bdd66b-e095-4d81-b47e-b38fca2c3127","Type":"ContainerDied","Data":"eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2"}
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.731445 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dct4" event={"ID":"e4bdd66b-e095-4d81-b47e-b38fca2c3127","Type":"ContainerDied","Data":"0c51f71f274e19e9c29cbc59d9ed277652e69d183654665c1a5cb2ffc8ed4a60"}
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.731574 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dct4"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.732108 4695 scope.go:117] "RemoveContainer" containerID="eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.760067 4695 scope.go:117] "RemoveContainer" containerID="1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.781442 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dct4"]
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.787683 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dct4"]
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.797008 4695 scope.go:117] "RemoveContainer" containerID="af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.841108 4695 scope.go:117] "RemoveContainer" containerID="eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2"
Nov 26 14:17:51 crc kubenswrapper[4695]: E1126 14:17:51.841618 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2\": container with ID starting with eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2 not found: ID does not exist" containerID="eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.841665 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2"} err="failed to get container status \"eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2\": rpc error: code = NotFound desc = could not find container \"eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2\": container with ID starting with eedcae8fd9404fff33113d5052e5e2d55d37d4ebe91e8e5208592aa8c4e346c2 not found: ID does not exist"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.841694 4695 scope.go:117] "RemoveContainer" containerID="1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898"
Nov 26 14:17:51 crc kubenswrapper[4695]: E1126 14:17:51.842096 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898\": container with ID starting with 1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898 not found: ID does not exist" containerID="1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.842143 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898"} err="failed to get container status \"1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898\": rpc error: code = NotFound desc = could not find container \"1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898\": container with ID starting with 1d2725016b8ea132ab9224b3028e65750bdf9521196990c852569f4effd23898 not found: ID does not exist"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.842178 4695 scope.go:117] "RemoveContainer" containerID="af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541"
Nov 26 14:17:51 crc kubenswrapper[4695]: E1126 14:17:51.842590 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541\": container with ID starting with af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541 not found: ID does not exist" containerID="af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541"
Nov 26 14:17:51 crc kubenswrapper[4695]: I1126 14:17:51.842630 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541"} err="failed to get container status \"af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541\": rpc error: code = NotFound desc = could not find container \"af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541\": container with ID starting with af24ca6f04d3ea8789af5cbdaa0d2e8949da3f428de2434a66ab5f535bac4541 not found: ID does not exist"
Nov 26 14:17:53 crc kubenswrapper[4695]: I1126 14:17:53.177321 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" path="/var/lib/kubelet/pods/e4bdd66b-e095-4d81-b47e-b38fca2c3127/volumes"
Nov 26 14:18:01 crc kubenswrapper[4695]: I1126 14:18:01.162273 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7"
Nov 26 14:18:01 crc kubenswrapper[4695]: E1126 14:18:01.163280 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4"
Nov 26 14:18:14 crc kubenswrapper[4695]: I1126 14:18:14.162242 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7"
Nov 26 14:18:14 crc kubenswrapper[4695]: E1126 14:18:14.163159 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.751827 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vgjk8"]
Nov 26 14:18:19 crc kubenswrapper[4695]: E1126 14:18:19.752671 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="registry-server"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.752687 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="registry-server"
Nov 26 14:18:19 crc kubenswrapper[4695]: E1126 14:18:19.752715 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="extract-utilities"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.752725 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="extract-utilities"
Nov 26 14:18:19 crc kubenswrapper[4695]: E1126 14:18:19.752751 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="extract-content"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.752759 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="extract-content"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.752987 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4bdd66b-e095-4d81-b47e-b38fca2c3127" containerName="registry-server"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.754627 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.778669 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vgjk8"]
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.814724 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmtbm\" (UniqueName: \"kubernetes.io/projected/f8876723-70e1-4e19-815b-75e19210226b-kube-api-access-vmtbm\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.814947 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-catalog-content\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.815114 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-utilities\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.917225 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-utilities\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.917334 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmtbm\" (UniqueName: \"kubernetes.io/projected/f8876723-70e1-4e19-815b-75e19210226b-kube-api-access-vmtbm\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.917419 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-catalog-content\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.918060 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-catalog-content\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.918242 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-utilities\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:19 crc kubenswrapper[4695]: I1126 14:18:19.938404 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmtbm\" (UniqueName: \"kubernetes.io/projected/f8876723-70e1-4e19-815b-75e19210226b-kube-api-access-vmtbm\") pod \"community-operators-vgjk8\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") " pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:20 crc kubenswrapper[4695]: I1126 14:18:20.087027 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:20 crc kubenswrapper[4695]: I1126 14:18:20.645775 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vgjk8"]
Nov 26 14:18:20 crc kubenswrapper[4695]: I1126 14:18:20.979200 4695 generic.go:334] "Generic (PLEG): container finished" podID="f8876723-70e1-4e19-815b-75e19210226b" containerID="a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d" exitCode=0
Nov 26 14:18:20 crc kubenswrapper[4695]: I1126 14:18:20.979257 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgjk8" event={"ID":"f8876723-70e1-4e19-815b-75e19210226b","Type":"ContainerDied","Data":"a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d"}
Nov 26 14:18:20 crc kubenswrapper[4695]: I1126 14:18:20.979549 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgjk8" event={"ID":"f8876723-70e1-4e19-815b-75e19210226b","Type":"ContainerStarted","Data":"6f9e07d1143243933111fc313a025dc0d3e312c98c44c98da3871f38541672b6"}
Nov 26 14:18:21 crc kubenswrapper[4695]: I1126 14:18:21.989033 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgjk8" event={"ID":"f8876723-70e1-4e19-815b-75e19210226b","Type":"ContainerStarted","Data":"39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757"}
Nov 26 14:18:23 crc kubenswrapper[4695]: I1126 14:18:23.000204 4695 generic.go:334] "Generic (PLEG): container finished" podID="f8876723-70e1-4e19-815b-75e19210226b" containerID="39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757" exitCode=0
Nov 26 14:18:23 crc kubenswrapper[4695]: I1126 14:18:23.000338 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgjk8" event={"ID":"f8876723-70e1-4e19-815b-75e19210226b","Type":"ContainerDied","Data":"39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757"}
Nov 26 14:18:24 crc kubenswrapper[4695]: I1126 14:18:24.010394 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgjk8" event={"ID":"f8876723-70e1-4e19-815b-75e19210226b","Type":"ContainerStarted","Data":"007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d"}
Nov 26 14:18:24 crc kubenswrapper[4695]: I1126 14:18:24.039858 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vgjk8" podStartSLOduration=2.498928281 podStartE2EDuration="5.039838767s" podCreationTimestamp="2025-11-26 14:18:19 +0000 UTC" firstStartedPulling="2025-11-26 14:18:20.981468827 +0000 UTC m=+3284.617293909" lastFinishedPulling="2025-11-26 14:18:23.522379323 +0000 UTC m=+3287.158204395" observedRunningTime="2025-11-26 14:18:24.037833543 +0000 UTC m=+3287.673658625" watchObservedRunningTime="2025-11-26 14:18:24.039838767 +0000 UTC m=+3287.675663839"
Nov 26 14:18:25 crc kubenswrapper[4695]: I1126 14:18:25.163000 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7"
Nov 26 14:18:25 crc kubenswrapper[4695]: E1126 14:18:25.163554 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4"
Nov 26 14:18:30 crc kubenswrapper[4695]: I1126 14:18:30.087537 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:30 crc kubenswrapper[4695]: I1126 14:18:30.087884 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:30 crc kubenswrapper[4695]: I1126 14:18:30.140719 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:31 crc kubenswrapper[4695]: I1126 14:18:31.117034 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:31 crc kubenswrapper[4695]: I1126 14:18:31.180678 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vgjk8"]
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.086936 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vgjk8" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="registry-server" containerID="cri-o://007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d" gracePeriod=2
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.569871 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.674466 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmtbm\" (UniqueName: \"kubernetes.io/projected/f8876723-70e1-4e19-815b-75e19210226b-kube-api-access-vmtbm\") pod \"f8876723-70e1-4e19-815b-75e19210226b\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") "
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.674604 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-catalog-content\") pod \"f8876723-70e1-4e19-815b-75e19210226b\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") "
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.674664 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-utilities\") pod \"f8876723-70e1-4e19-815b-75e19210226b\" (UID: \"f8876723-70e1-4e19-815b-75e19210226b\") "
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.675756 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-utilities" (OuterVolumeSpecName: "utilities") pod "f8876723-70e1-4e19-815b-75e19210226b" (UID: "f8876723-70e1-4e19-815b-75e19210226b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.680469 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8876723-70e1-4e19-815b-75e19210226b-kube-api-access-vmtbm" (OuterVolumeSpecName: "kube-api-access-vmtbm") pod "f8876723-70e1-4e19-815b-75e19210226b" (UID: "f8876723-70e1-4e19-815b-75e19210226b"). InnerVolumeSpecName "kube-api-access-vmtbm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.722610 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8876723-70e1-4e19-815b-75e19210226b" (UID: "f8876723-70e1-4e19-815b-75e19210226b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.776334 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.776383 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8876723-70e1-4e19-815b-75e19210226b-utilities\") on node \"crc\" DevicePath \"\""
Nov 26 14:18:33 crc kubenswrapper[4695]: I1126 14:18:33.776394 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmtbm\" (UniqueName: \"kubernetes.io/projected/f8876723-70e1-4e19-815b-75e19210226b-kube-api-access-vmtbm\") on node \"crc\" DevicePath \"\""
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.099604 4695 generic.go:334] "Generic (PLEG): container finished" podID="f8876723-70e1-4e19-815b-75e19210226b" containerID="007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d" exitCode=0
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.099650 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgjk8" event={"ID":"f8876723-70e1-4e19-815b-75e19210226b","Type":"ContainerDied","Data":"007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d"}
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.099706 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vgjk8" event={"ID":"f8876723-70e1-4e19-815b-75e19210226b","Type":"ContainerDied","Data":"6f9e07d1143243933111fc313a025dc0d3e312c98c44c98da3871f38541672b6"}
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.099737 4695 scope.go:117] "RemoveContainer" containerID="007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.099728 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vgjk8"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.140613 4695 scope.go:117] "RemoveContainer" containerID="39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.144003 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vgjk8"]
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.151897 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vgjk8"]
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.171534 4695 scope.go:117] "RemoveContainer" containerID="a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.212491 4695 scope.go:117] "RemoveContainer" containerID="007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d"
Nov 26 14:18:34 crc kubenswrapper[4695]: E1126 14:18:34.213040 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d\": container with ID starting with 007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d not found: ID does not exist" containerID="007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.213155 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d"} err="failed to get container status \"007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d\": rpc error: code = NotFound desc = could not find container \"007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d\": container with ID starting with 007ce327a984f48d8033b7074148b3f18c22d0caf0de0966edeb33d8609d400d not found: ID does not exist"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.213246 4695 scope.go:117] "RemoveContainer" containerID="39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757"
Nov 26 14:18:34 crc kubenswrapper[4695]: E1126 14:18:34.213727 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757\": container with ID starting with 39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757 not found: ID does not exist" containerID="39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.213756 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757"} err="failed to get container status \"39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757\": rpc error: code = NotFound desc = could not find container \"39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757\": container with ID starting with 39e6d2adac554b4a50ef84fe5f8cdb22d46c931dbf537159607278595b931757 not found: ID does not exist"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.213777 4695 scope.go:117] "RemoveContainer" containerID="a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d"
Nov 26 14:18:34 crc kubenswrapper[4695]: E1126 14:18:34.214010 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d\": container with ID starting with a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d not found: ID does not exist" containerID="a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d"
Nov 26 14:18:34 crc kubenswrapper[4695]: I1126 14:18:34.214102 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d"} err="failed to get container status \"a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d\": rpc error: code = NotFound desc = could not find container \"a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d\": container with ID starting with a580665d6c6630e1bcf8dd944671d96dda7e7e5464cf2b744e7517dc9dae423d not found: ID does not exist"
Nov 26 14:18:35 crc kubenswrapper[4695]: I1126 14:18:35.173338 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8876723-70e1-4e19-815b-75e19210226b" path="/var/lib/kubelet/pods/f8876723-70e1-4e19-815b-75e19210226b/volumes"
Nov 26 14:18:39 crc kubenswrapper[4695]: I1126 14:18:39.162673 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7"
Nov 26 14:18:40 crc kubenswrapper[4695]: I1126 14:18:40.155893 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"ec5f1785829943a3c37c9a2049d20f31b1c4d5e54ba0f8ebdde38eb736a862b5"}
Nov 26 14:21:06 crc kubenswrapper[4695]: I1126 14:21:06.396717 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 14:21:06 crc kubenswrapper[4695]: I1126 14:21:06.398064 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 14:21:36 crc kubenswrapper[4695]: I1126 14:21:36.397530 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 26 14:21:36 crc kubenswrapper[4695]: I1126 14:21:36.398027 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.376763 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mhjbm"]
Nov 26 14:21:43 crc kubenswrapper[4695]: E1126 14:21:43.377576 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="extract-utilities"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.377590 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="extract-utilities"
Nov 26 14:21:43 crc kubenswrapper[4695]: E1126 14:21:43.377617 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="extract-content"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.377623 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="extract-content"
Nov 26 14:21:43 crc kubenswrapper[4695]: E1126 14:21:43.377638 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="registry-server"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.377644 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="registry-server"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.377846 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8876723-70e1-4e19-815b-75e19210226b" containerName="registry-server"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.379112 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.392237 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhjbm"]
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.487016 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-utilities\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.487288 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-catalog-content\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.487550 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4k8j\" (UniqueName: \"kubernetes.io/projected/1dd28d68-c0bd-4ab5-b538-b71837799fd2-kube-api-access-s4k8j\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.592071 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-catalog-content\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.592148 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4k8j\" (UniqueName: \"kubernetes.io/projected/1dd28d68-c0bd-4ab5-b538-b71837799fd2-kube-api-access-s4k8j\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.592242 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-utilities\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.592673 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-catalog-content\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.592743 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-utilities\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.610952 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4k8j\" (UniqueName: \"kubernetes.io/projected/1dd28d68-c0bd-4ab5-b538-b71837799fd2-kube-api-access-s4k8j\") pod \"certified-operators-mhjbm\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:43 crc kubenswrapper[4695]: I1126 14:21:43.699791 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:44 crc kubenswrapper[4695]: I1126 14:21:44.215963 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mhjbm"]
Nov 26 14:21:45 crc kubenswrapper[4695]: I1126 14:21:45.205719 4695 generic.go:334] "Generic (PLEG): container finished" podID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerID="1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67" exitCode=0
Nov 26 14:21:45 crc kubenswrapper[4695]: I1126 14:21:45.205784 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhjbm" event={"ID":"1dd28d68-c0bd-4ab5-b538-b71837799fd2","Type":"ContainerDied","Data":"1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67"}
Nov 26 14:21:45 crc kubenswrapper[4695]: I1126 14:21:45.205836 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhjbm" event={"ID":"1dd28d68-c0bd-4ab5-b538-b71837799fd2","Type":"ContainerStarted","Data":"285da2c94a0c87e2780c621a6d1a8960f582f824d51d430ac485904df24a3a54"}
Nov 26 14:21:45 crc kubenswrapper[4695]: I1126 14:21:45.208461 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 26 14:21:48 crc kubenswrapper[4695]: I1126 14:21:48.235185 4695 generic.go:334] "Generic (PLEG): container finished" podID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerID="ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633" exitCode=0
Nov 26 14:21:48 crc kubenswrapper[4695]: I1126 14:21:48.235261 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhjbm" event={"ID":"1dd28d68-c0bd-4ab5-b538-b71837799fd2","Type":"ContainerDied","Data":"ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633"}
Nov 26 14:21:49 crc kubenswrapper[4695]: I1126 14:21:49.245559 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhjbm" event={"ID":"1dd28d68-c0bd-4ab5-b538-b71837799fd2","Type":"ContainerStarted","Data":"04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02"}
Nov 26 14:21:49 crc kubenswrapper[4695]: I1126 14:21:49.265743 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mhjbm" podStartSLOduration=2.56067955 podStartE2EDuration="6.265727654s" podCreationTimestamp="2025-11-26 14:21:43 +0000 UTC" firstStartedPulling="2025-11-26 14:21:45.208208821 +0000 UTC m=+3488.844033903" lastFinishedPulling="2025-11-26 14:21:48.913256925 +0000 UTC m=+3492.549082007" observedRunningTime="2025-11-26 14:21:49.26372948 +0000 UTC m=+3492.899554562" watchObservedRunningTime="2025-11-26 14:21:49.265727654 +0000 UTC m=+3492.901552736"
Nov 26 14:21:53 crc kubenswrapper[4695]: I1126 14:21:53.700520 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:53 crc kubenswrapper[4695]: I1126 14:21:53.702247 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:53 crc kubenswrapper[4695]: I1126 14:21:53.745322 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:54 crc kubenswrapper[4695]: I1126 14:21:54.331991 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mhjbm"
Nov 26 14:21:54 crc kubenswrapper[4695]: I1126 14:21:54.379628 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhjbm"]
Nov 26 14:21:56 crc kubenswrapper[4695]: I1126 14:21:56.303215 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mhjbm"
podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerName="registry-server" containerID="cri-o://04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02" gracePeriod=2 Nov 26 14:21:56 crc kubenswrapper[4695]: I1126 14:21:56.805315 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mhjbm" Nov 26 14:21:56 crc kubenswrapper[4695]: I1126 14:21:56.947980 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4k8j\" (UniqueName: \"kubernetes.io/projected/1dd28d68-c0bd-4ab5-b538-b71837799fd2-kube-api-access-s4k8j\") pod \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " Nov 26 14:21:56 crc kubenswrapper[4695]: I1126 14:21:56.948184 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-catalog-content\") pod \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " Nov 26 14:21:56 crc kubenswrapper[4695]: I1126 14:21:56.948312 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-utilities\") pod \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\" (UID: \"1dd28d68-c0bd-4ab5-b538-b71837799fd2\") " Nov 26 14:21:56 crc kubenswrapper[4695]: I1126 14:21:56.949418 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-utilities" (OuterVolumeSpecName: "utilities") pod "1dd28d68-c0bd-4ab5-b538-b71837799fd2" (UID: "1dd28d68-c0bd-4ab5-b538-b71837799fd2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:21:56 crc kubenswrapper[4695]: I1126 14:21:56.953674 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd28d68-c0bd-4ab5-b538-b71837799fd2-kube-api-access-s4k8j" (OuterVolumeSpecName: "kube-api-access-s4k8j") pod "1dd28d68-c0bd-4ab5-b538-b71837799fd2" (UID: "1dd28d68-c0bd-4ab5-b538-b71837799fd2"). InnerVolumeSpecName "kube-api-access-s4k8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.004973 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1dd28d68-c0bd-4ab5-b538-b71837799fd2" (UID: "1dd28d68-c0bd-4ab5-b538-b71837799fd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.050745 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.050780 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4k8j\" (UniqueName: \"kubernetes.io/projected/1dd28d68-c0bd-4ab5-b538-b71837799fd2-kube-api-access-s4k8j\") on node \"crc\" DevicePath \"\"" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.050794 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1dd28d68-c0bd-4ab5-b538-b71837799fd2-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.322132 4695 generic.go:334] "Generic (PLEG): container finished" podID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" 
containerID="04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02" exitCode=0 Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.322188 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhjbm" event={"ID":"1dd28d68-c0bd-4ab5-b538-b71837799fd2","Type":"ContainerDied","Data":"04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02"} Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.322221 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mhjbm" event={"ID":"1dd28d68-c0bd-4ab5-b538-b71837799fd2","Type":"ContainerDied","Data":"285da2c94a0c87e2780c621a6d1a8960f582f824d51d430ac485904df24a3a54"} Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.322239 4695 scope.go:117] "RemoveContainer" containerID="04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.322252 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mhjbm" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.351849 4695 scope.go:117] "RemoveContainer" containerID="ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.357973 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mhjbm"] Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.366610 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mhjbm"] Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.375075 4695 scope.go:117] "RemoveContainer" containerID="1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.428028 4695 scope.go:117] "RemoveContainer" containerID="04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02" Nov 26 14:21:57 crc kubenswrapper[4695]: E1126 14:21:57.428587 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02\": container with ID starting with 04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02 not found: ID does not exist" containerID="04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.428630 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02"} err="failed to get container status \"04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02\": rpc error: code = NotFound desc = could not find container \"04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02\": container with ID starting with 04aabe189c7565dd66b4c260618c5741aff9cd50ab5af53a036065bf5b12bc02 not 
found: ID does not exist" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.428658 4695 scope.go:117] "RemoveContainer" containerID="ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633" Nov 26 14:21:57 crc kubenswrapper[4695]: E1126 14:21:57.428956 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633\": container with ID starting with ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633 not found: ID does not exist" containerID="ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.428977 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633"} err="failed to get container status \"ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633\": rpc error: code = NotFound desc = could not find container \"ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633\": container with ID starting with ecd828502a1b16923b14ef1f0e004686662780048c683b8da9e2cf35d2441633 not found: ID does not exist" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.428993 4695 scope.go:117] "RemoveContainer" containerID="1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67" Nov 26 14:21:57 crc kubenswrapper[4695]: E1126 14:21:57.429506 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67\": container with ID starting with 1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67 not found: ID does not exist" containerID="1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67" Nov 26 14:21:57 crc kubenswrapper[4695]: I1126 14:21:57.429549 4695 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67"} err="failed to get container status \"1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67\": rpc error: code = NotFound desc = could not find container \"1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67\": container with ID starting with 1c2ddf9c41ea0d529003015ddc58bee974daa0d016c93a781625ee728eb42c67 not found: ID does not exist" Nov 26 14:21:59 crc kubenswrapper[4695]: I1126 14:21:59.172265 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" path="/var/lib/kubelet/pods/1dd28d68-c0bd-4ab5-b538-b71837799fd2/volumes" Nov 26 14:22:06 crc kubenswrapper[4695]: I1126 14:22:06.396680 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:22:06 crc kubenswrapper[4695]: I1126 14:22:06.397235 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:22:06 crc kubenswrapper[4695]: I1126 14:22:06.397285 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 14:22:06 crc kubenswrapper[4695]: I1126 14:22:06.398042 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ec5f1785829943a3c37c9a2049d20f31b1c4d5e54ba0f8ebdde38eb736a862b5"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 14:22:06 crc kubenswrapper[4695]: I1126 14:22:06.398101 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://ec5f1785829943a3c37c9a2049d20f31b1c4d5e54ba0f8ebdde38eb736a862b5" gracePeriod=600 Nov 26 14:22:06 crc kubenswrapper[4695]: E1126 14:22:06.585032 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73cbd5f2_751e_49c2_b804_e81b9ca46cd4.slice/crio-ec5f1785829943a3c37c9a2049d20f31b1c4d5e54ba0f8ebdde38eb736a862b5.scope\": RecentStats: unable to find data in memory cache]" Nov 26 14:22:07 crc kubenswrapper[4695]: I1126 14:22:07.442535 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="ec5f1785829943a3c37c9a2049d20f31b1c4d5e54ba0f8ebdde38eb736a862b5" exitCode=0 Nov 26 14:22:07 crc kubenswrapper[4695]: I1126 14:22:07.442603 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"ec5f1785829943a3c37c9a2049d20f31b1c4d5e54ba0f8ebdde38eb736a862b5"} Nov 26 14:22:07 crc kubenswrapper[4695]: I1126 14:22:07.443157 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" 
event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29"} Nov 26 14:22:07 crc kubenswrapper[4695]: I1126 14:22:07.443183 4695 scope.go:117] "RemoveContainer" containerID="6bba40274031a5ec6ca68c379b8cc0350b05b87badd17d7ac8e4b1146fe218c7" Nov 26 14:24:03 crc kubenswrapper[4695]: I1126 14:24:03.572470 4695 generic.go:334] "Generic (PLEG): container finished" podID="d7930b08-66ca-496a-94a1-b68e2fe60177" containerID="abe370053fe34b2219192d0e56898f50da89a0c8abe3d97751edbff45686f53a" exitCode=0 Nov 26 14:24:03 crc kubenswrapper[4695]: I1126 14:24:03.572560 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d7930b08-66ca-496a-94a1-b68e2fe60177","Type":"ContainerDied","Data":"abe370053fe34b2219192d0e56898f50da89a0c8abe3d97751edbff45686f53a"} Nov 26 14:24:04 crc kubenswrapper[4695]: I1126 14:24:04.903165 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.028677 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config-secret\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.028719 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-workdir\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.028773 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-temporary\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.028921 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.028956 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ca-certs\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.029008 4695 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ssh-key\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.029025 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.029066 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q26c\" (UniqueName: \"kubernetes.io/projected/d7930b08-66ca-496a-94a1-b68e2fe60177-kube-api-access-7q26c\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.029091 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-config-data\") pod \"d7930b08-66ca-496a-94a1-b68e2fe60177\" (UID: \"d7930b08-66ca-496a-94a1-b68e2fe60177\") " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.029986 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.030090 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-config-data" (OuterVolumeSpecName: "config-data") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.034239 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.035287 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7930b08-66ca-496a-94a1-b68e2fe60177-kube-api-access-7q26c" (OuterVolumeSpecName: "kube-api-access-7q26c") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "kube-api-access-7q26c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.035498 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.060027 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.064733 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.084525 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.085097 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d7930b08-66ca-496a-94a1-b68e2fe60177" (UID: "d7930b08-66ca-496a-94a1-b68e2fe60177"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131077 4695 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-config-data\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131106 4695 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131117 4695 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131129 4695 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/d7930b08-66ca-496a-94a1-b68e2fe60177-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131166 4695 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131175 4695 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ca-certs\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131183 4695 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d7930b08-66ca-496a-94a1-b68e2fe60177-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc 
kubenswrapper[4695]: I1126 14:24:05.131191 4695 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d7930b08-66ca-496a-94a1-b68e2fe60177-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.131246 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q26c\" (UniqueName: \"kubernetes.io/projected/d7930b08-66ca-496a-94a1-b68e2fe60177-kube-api-access-7q26c\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.151498 4695 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.232745 4695 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.590435 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"d7930b08-66ca-496a-94a1-b68e2fe60177","Type":"ContainerDied","Data":"2aff1dbd2b99b969d915aa33da068a2457d5f9c86e60bf9c020401c675fb1d85"} Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.590471 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Nov 26 14:24:05 crc kubenswrapper[4695]: I1126 14:24:05.590472 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aff1dbd2b99b969d915aa33da068a2457d5f9c86e60bf9c020401c675fb1d85" Nov 26 14:24:06 crc kubenswrapper[4695]: I1126 14:24:06.396963 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:24:06 crc kubenswrapper[4695]: I1126 14:24:06.397278 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.503156 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 26 14:24:14 crc kubenswrapper[4695]: E1126 14:24:14.504243 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerName="extract-utilities" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.504259 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerName="extract-utilities" Nov 26 14:24:14 crc kubenswrapper[4695]: E1126 14:24:14.504276 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerName="registry-server" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.504284 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" 
containerName="registry-server" Nov 26 14:24:14 crc kubenswrapper[4695]: E1126 14:24:14.504313 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7930b08-66ca-496a-94a1-b68e2fe60177" containerName="tempest-tests-tempest-tests-runner" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.504322 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7930b08-66ca-496a-94a1-b68e2fe60177" containerName="tempest-tests-tempest-tests-runner" Nov 26 14:24:14 crc kubenswrapper[4695]: E1126 14:24:14.504361 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerName="extract-content" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.504369 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerName="extract-content" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.504625 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7930b08-66ca-496a-94a1-b68e2fe60177" containerName="tempest-tests-tempest-tests-runner" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.504646 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd28d68-c0bd-4ab5-b538-b71837799fd2" containerName="registry-server" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.505399 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.507255 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-nwrms" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.512492 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.616592 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e2a3eda-3b39-4953-b576-ae3652af0195\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.617249 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pj26\" (UniqueName: \"kubernetes.io/projected/5e2a3eda-3b39-4953-b576-ae3652af0195-kube-api-access-6pj26\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e2a3eda-3b39-4953-b576-ae3652af0195\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.718869 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pj26\" (UniqueName: \"kubernetes.io/projected/5e2a3eda-3b39-4953-b576-ae3652af0195-kube-api-access-6pj26\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e2a3eda-3b39-4953-b576-ae3652af0195\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.718945 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e2a3eda-3b39-4953-b576-ae3652af0195\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.719414 4695 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e2a3eda-3b39-4953-b576-ae3652af0195\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.736462 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pj26\" (UniqueName: \"kubernetes.io/projected/5e2a3eda-3b39-4953-b576-ae3652af0195-kube-api-access-6pj26\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e2a3eda-3b39-4953-b576-ae3652af0195\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.743413 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"5e2a3eda-3b39-4953-b576-ae3652af0195\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:14 crc kubenswrapper[4695]: I1126 14:24:14.864152 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Nov 26 14:24:15 crc kubenswrapper[4695]: I1126 14:24:15.288789 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Nov 26 14:24:15 crc kubenswrapper[4695]: I1126 14:24:15.692146 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5e2a3eda-3b39-4953-b576-ae3652af0195","Type":"ContainerStarted","Data":"1eabd18931142e2f424fc07828430ddeab3da3379b7ac1a16b20a03ecd92e752"} Nov 26 14:24:18 crc kubenswrapper[4695]: I1126 14:24:18.724722 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"5e2a3eda-3b39-4953-b576-ae3652af0195","Type":"ContainerStarted","Data":"686831e60fa66078cc106abf1e6b4565059b711f04917741555ec72c7f310ec7"} Nov 26 14:24:18 crc kubenswrapper[4695]: I1126 14:24:18.742539 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.6481815640000002 podStartE2EDuration="4.742517947s" podCreationTimestamp="2025-11-26 14:24:14 +0000 UTC" firstStartedPulling="2025-11-26 14:24:15.294609309 +0000 UTC m=+3638.930434391" lastFinishedPulling="2025-11-26 14:24:18.388945692 +0000 UTC m=+3642.024770774" observedRunningTime="2025-11-26 14:24:18.739922264 +0000 UTC m=+3642.375747376" watchObservedRunningTime="2025-11-26 14:24:18.742517947 +0000 UTC m=+3642.378343029" Nov 26 14:24:36 crc kubenswrapper[4695]: I1126 14:24:36.397466 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:24:36 crc 
kubenswrapper[4695]: I1126 14:24:36.398408 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.620816 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8mxld/must-gather-qxg6x"] Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.623067 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.626649 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8mxld"/"default-dockercfg-fsdk7" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.627005 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8mxld"/"kube-root-ca.crt" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.629786 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8mxld"/"openshift-service-ca.crt" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.639776 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8mxld/must-gather-qxg6x"] Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.693726 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20e41060-a51b-460d-baef-7b2e118d2a4f-must-gather-output\") pod \"must-gather-qxg6x\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.693810 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6hfb\" (UniqueName: \"kubernetes.io/projected/20e41060-a51b-460d-baef-7b2e118d2a4f-kube-api-access-m6hfb\") pod \"must-gather-qxg6x\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.795724 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20e41060-a51b-460d-baef-7b2e118d2a4f-must-gather-output\") pod \"must-gather-qxg6x\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.796078 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6hfb\" (UniqueName: \"kubernetes.io/projected/20e41060-a51b-460d-baef-7b2e118d2a4f-kube-api-access-m6hfb\") pod \"must-gather-qxg6x\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.796188 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20e41060-a51b-460d-baef-7b2e118d2a4f-must-gather-output\") pod \"must-gather-qxg6x\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.814644 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6hfb\" (UniqueName: \"kubernetes.io/projected/20e41060-a51b-460d-baef-7b2e118d2a4f-kube-api-access-m6hfb\") pod \"must-gather-qxg6x\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:39 crc kubenswrapper[4695]: I1126 14:24:39.946441 4695 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:24:40 crc kubenswrapper[4695]: I1126 14:24:40.433325 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8mxld/must-gather-qxg6x"] Nov 26 14:24:40 crc kubenswrapper[4695]: I1126 14:24:40.925900 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/must-gather-qxg6x" event={"ID":"20e41060-a51b-460d-baef-7b2e118d2a4f","Type":"ContainerStarted","Data":"4ef574d0c432e76d5676226b19b050a932e622277cf440110f4884826a8e03a4"} Nov 26 14:24:46 crc kubenswrapper[4695]: I1126 14:24:46.988104 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/must-gather-qxg6x" event={"ID":"20e41060-a51b-460d-baef-7b2e118d2a4f","Type":"ContainerStarted","Data":"2bb051272c99110814ce9aa6689d87d3d60d7f25fadb71f2d94d39b91bcc087e"} Nov 26 14:24:48 crc kubenswrapper[4695]: I1126 14:24:48.000509 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/must-gather-qxg6x" event={"ID":"20e41060-a51b-460d-baef-7b2e118d2a4f","Type":"ContainerStarted","Data":"11ab25debbadeb0a0a128d0a58e6c05ab092337684f2af7106c5dda56a8c403d"} Nov 26 14:24:48 crc kubenswrapper[4695]: I1126 14:24:48.025048 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8mxld/must-gather-qxg6x" podStartSLOduration=2.866536555 podStartE2EDuration="9.025027202s" podCreationTimestamp="2025-11-26 14:24:39 +0000 UTC" firstStartedPulling="2025-11-26 14:24:40.436771395 +0000 UTC m=+3664.072596477" lastFinishedPulling="2025-11-26 14:24:46.595262042 +0000 UTC m=+3670.231087124" observedRunningTime="2025-11-26 14:24:48.01583553 +0000 UTC m=+3671.651660632" watchObservedRunningTime="2025-11-26 14:24:48.025027202 +0000 UTC m=+3671.660852284" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.314998 4695 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-8mxld/crc-debug-z77cd"] Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.316836 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.481797 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bgg\" (UniqueName: \"kubernetes.io/projected/be179f73-cda8-40d3-b444-265c4c779c16-kube-api-access-v4bgg\") pod \"crc-debug-z77cd\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.481967 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be179f73-cda8-40d3-b444-265c4c779c16-host\") pod \"crc-debug-z77cd\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.583794 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bgg\" (UniqueName: \"kubernetes.io/projected/be179f73-cda8-40d3-b444-265c4c779c16-kube-api-access-v4bgg\") pod \"crc-debug-z77cd\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.583864 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be179f73-cda8-40d3-b444-265c4c779c16-host\") pod \"crc-debug-z77cd\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.583945 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/be179f73-cda8-40d3-b444-265c4c779c16-host\") pod \"crc-debug-z77cd\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.622463 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bgg\" (UniqueName: \"kubernetes.io/projected/be179f73-cda8-40d3-b444-265c4c779c16-kube-api-access-v4bgg\") pod \"crc-debug-z77cd\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: I1126 14:24:50.642602 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:24:50 crc kubenswrapper[4695]: W1126 14:24:50.716678 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe179f73_cda8_40d3_b444_265c4c779c16.slice/crio-140c2cdff1052610e07186d4dc5a21276adf00ed04ac43e31d4a74bd9049fd9e WatchSource:0}: Error finding container 140c2cdff1052610e07186d4dc5a21276adf00ed04ac43e31d4a74bd9049fd9e: Status 404 returned error can't find the container with id 140c2cdff1052610e07186d4dc5a21276adf00ed04ac43e31d4a74bd9049fd9e Nov 26 14:24:51 crc kubenswrapper[4695]: I1126 14:24:51.023064 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-z77cd" event={"ID":"be179f73-cda8-40d3-b444-265c4c779c16","Type":"ContainerStarted","Data":"140c2cdff1052610e07186d4dc5a21276adf00ed04ac43e31d4a74bd9049fd9e"} Nov 26 14:25:06 crc kubenswrapper[4695]: I1126 14:25:06.397163 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 
14:25:06 crc kubenswrapper[4695]: I1126 14:25:06.398562 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:25:06 crc kubenswrapper[4695]: I1126 14:25:06.398701 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 14:25:06 crc kubenswrapper[4695]: I1126 14:25:06.399487 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 14:25:06 crc kubenswrapper[4695]: I1126 14:25:06.399620 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" gracePeriod=600 Nov 26 14:25:07 crc kubenswrapper[4695]: I1126 14:25:07.166990 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" exitCode=0 Nov 26 14:25:07 crc kubenswrapper[4695]: I1126 14:25:07.171029 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" 
event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29"} Nov 26 14:25:07 crc kubenswrapper[4695]: I1126 14:25:07.171074 4695 scope.go:117] "RemoveContainer" containerID="ec5f1785829943a3c37c9a2049d20f31b1c4d5e54ba0f8ebdde38eb736a862b5" Nov 26 14:25:07 crc kubenswrapper[4695]: E1126 14:25:07.670506 4695 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Nov 26 14:25:07 crc kubenswrapper[4695]: E1126 14:25:07.670832 4695 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v4bgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-z77cd_openshift-must-gather-8mxld(be179f73-cda8-40d3-b444-265c4c779c16): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 26 14:25:07 crc kubenswrapper[4695]: E1126 14:25:07.672152 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-8mxld/crc-debug-z77cd" podUID="be179f73-cda8-40d3-b444-265c4c779c16" Nov 26 14:25:07 crc kubenswrapper[4695]: E1126 14:25:07.821513 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:25:08 crc kubenswrapper[4695]: I1126 14:25:08.177154 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:25:08 crc kubenswrapper[4695]: E1126 14:25:08.177510 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:25:08 crc kubenswrapper[4695]: E1126 14:25:08.178076 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-8mxld/crc-debug-z77cd" podUID="be179f73-cda8-40d3-b444-265c4c779c16" Nov 26 14:25:21 crc kubenswrapper[4695]: I1126 14:25:21.291262 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-z77cd" event={"ID":"be179f73-cda8-40d3-b444-265c4c779c16","Type":"ContainerStarted","Data":"1b59e90d236c9865765dba96f65f93008592269875570bec77ee0584304059ee"} Nov 26 14:25:21 crc kubenswrapper[4695]: I1126 14:25:21.305268 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8mxld/crc-debug-z77cd" podStartSLOduration=1.26742295 podStartE2EDuration="31.305251314s" podCreationTimestamp="2025-11-26 14:24:50 +0000 UTC" 
firstStartedPulling="2025-11-26 14:24:50.718549078 +0000 UTC m=+3674.354374160" lastFinishedPulling="2025-11-26 14:25:20.756377442 +0000 UTC m=+3704.392202524" observedRunningTime="2025-11-26 14:25:21.304059616 +0000 UTC m=+3704.939884708" watchObservedRunningTime="2025-11-26 14:25:21.305251314 +0000 UTC m=+3704.941076396" Nov 26 14:25:23 crc kubenswrapper[4695]: I1126 14:25:23.162302 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:25:23 crc kubenswrapper[4695]: E1126 14:25:23.163117 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:25:34 crc kubenswrapper[4695]: I1126 14:25:34.163176 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:25:34 crc kubenswrapper[4695]: E1126 14:25:34.164278 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:25:47 crc kubenswrapper[4695]: I1126 14:25:47.178750 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:25:47 crc kubenswrapper[4695]: E1126 14:25:47.180011 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:26:00 crc kubenswrapper[4695]: I1126 14:26:00.689852 4695 generic.go:334] "Generic (PLEG): container finished" podID="be179f73-cda8-40d3-b444-265c4c779c16" containerID="1b59e90d236c9865765dba96f65f93008592269875570bec77ee0584304059ee" exitCode=0 Nov 26 14:26:00 crc kubenswrapper[4695]: I1126 14:26:00.689947 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-z77cd" event={"ID":"be179f73-cda8-40d3-b444-265c4c779c16","Type":"ContainerDied","Data":"1b59e90d236c9865765dba96f65f93008592269875570bec77ee0584304059ee"} Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.797403 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.831455 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8mxld/crc-debug-z77cd"] Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.839564 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8mxld/crc-debug-z77cd"] Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.901362 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be179f73-cda8-40d3-b444-265c4c779c16-host\") pod \"be179f73-cda8-40d3-b444-265c4c779c16\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.901690 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4bgg\" (UniqueName: \"kubernetes.io/projected/be179f73-cda8-40d3-b444-265c4c779c16-kube-api-access-v4bgg\") pod \"be179f73-cda8-40d3-b444-265c4c779c16\" (UID: \"be179f73-cda8-40d3-b444-265c4c779c16\") " Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.901484 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be179f73-cda8-40d3-b444-265c4c779c16-host" (OuterVolumeSpecName: "host") pod "be179f73-cda8-40d3-b444-265c4c779c16" (UID: "be179f73-cda8-40d3-b444-265c4c779c16"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.902266 4695 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/be179f73-cda8-40d3-b444-265c4c779c16-host\") on node \"crc\" DevicePath \"\"" Nov 26 14:26:01 crc kubenswrapper[4695]: I1126 14:26:01.914653 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be179f73-cda8-40d3-b444-265c4c779c16-kube-api-access-v4bgg" (OuterVolumeSpecName: "kube-api-access-v4bgg") pod "be179f73-cda8-40d3-b444-265c4c779c16" (UID: "be179f73-cda8-40d3-b444-265c4c779c16"). InnerVolumeSpecName "kube-api-access-v4bgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:26:02 crc kubenswrapper[4695]: I1126 14:26:02.004388 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4bgg\" (UniqueName: \"kubernetes.io/projected/be179f73-cda8-40d3-b444-265c4c779c16-kube-api-access-v4bgg\") on node \"crc\" DevicePath \"\"" Nov 26 14:26:02 crc kubenswrapper[4695]: I1126 14:26:02.162646 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:26:02 crc kubenswrapper[4695]: E1126 14:26:02.163172 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:26:02 crc kubenswrapper[4695]: I1126 14:26:02.711215 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="140c2cdff1052610e07186d4dc5a21276adf00ed04ac43e31d4a74bd9049fd9e" Nov 26 14:26:02 crc kubenswrapper[4695]: I1126 
14:26:02.711252 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-z77cd" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.020718 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8mxld/crc-debug-45d8d"] Nov 26 14:26:03 crc kubenswrapper[4695]: E1126 14:26:03.021444 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be179f73-cda8-40d3-b444-265c4c779c16" containerName="container-00" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.021457 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="be179f73-cda8-40d3-b444-265c4c779c16" containerName="container-00" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.021745 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="be179f73-cda8-40d3-b444-265c4c779c16" containerName="container-00" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.022447 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.125199 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkdkx\" (UniqueName: \"kubernetes.io/projected/5df2219c-5da3-46ac-af6d-db31c13fcc12-kube-api-access-zkdkx\") pod \"crc-debug-45d8d\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.125324 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5df2219c-5da3-46ac-af6d-db31c13fcc12-host\") pod \"crc-debug-45d8d\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.174003 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be179f73-cda8-40d3-b444-265c4c779c16" path="/var/lib/kubelet/pods/be179f73-cda8-40d3-b444-265c4c779c16/volumes" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.227081 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkdkx\" (UniqueName: \"kubernetes.io/projected/5df2219c-5da3-46ac-af6d-db31c13fcc12-kube-api-access-zkdkx\") pod \"crc-debug-45d8d\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.227419 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5df2219c-5da3-46ac-af6d-db31c13fcc12-host\") pod \"crc-debug-45d8d\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.228201 4695 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5df2219c-5da3-46ac-af6d-db31c13fcc12-host\") pod \"crc-debug-45d8d\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.246641 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkdkx\" (UniqueName: \"kubernetes.io/projected/5df2219c-5da3-46ac-af6d-db31c13fcc12-kube-api-access-zkdkx\") pod \"crc-debug-45d8d\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.343127 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.721904 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-45d8d" event={"ID":"5df2219c-5da3-46ac-af6d-db31c13fcc12","Type":"ContainerStarted","Data":"773de3a4c1e70f0f92c530abcaa46327bcfdb766a680a7273413bf89ba4183a0"} Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.722198 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-45d8d" event={"ID":"5df2219c-5da3-46ac-af6d-db31c13fcc12","Type":"ContainerStarted","Data":"2a8bd1b329d0a5807750ec3f201e9e60f11aaf871fec67032a03db030c5ad5d8"} Nov 26 14:26:03 crc kubenswrapper[4695]: I1126 14:26:03.738943 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8mxld/crc-debug-45d8d" podStartSLOduration=0.738921594 podStartE2EDuration="738.921594ms" podCreationTimestamp="2025-11-26 14:26:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 14:26:03.73345848 +0000 UTC m=+3747.369283562" watchObservedRunningTime="2025-11-26 14:26:03.738921594 +0000 UTC 
m=+3747.374746676" Nov 26 14:26:04 crc kubenswrapper[4695]: I1126 14:26:04.732072 4695 generic.go:334] "Generic (PLEG): container finished" podID="5df2219c-5da3-46ac-af6d-db31c13fcc12" containerID="773de3a4c1e70f0f92c530abcaa46327bcfdb766a680a7273413bf89ba4183a0" exitCode=0 Nov 26 14:26:04 crc kubenswrapper[4695]: I1126 14:26:04.732116 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-45d8d" event={"ID":"5df2219c-5da3-46ac-af6d-db31c13fcc12","Type":"ContainerDied","Data":"773de3a4c1e70f0f92c530abcaa46327bcfdb766a680a7273413bf89ba4183a0"} Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.861840 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.890432 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8mxld/crc-debug-45d8d"] Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.898688 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8mxld/crc-debug-45d8d"] Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.973863 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkdkx\" (UniqueName: \"kubernetes.io/projected/5df2219c-5da3-46ac-af6d-db31c13fcc12-kube-api-access-zkdkx\") pod \"5df2219c-5da3-46ac-af6d-db31c13fcc12\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.973928 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5df2219c-5da3-46ac-af6d-db31c13fcc12-host\") pod \"5df2219c-5da3-46ac-af6d-db31c13fcc12\" (UID: \"5df2219c-5da3-46ac-af6d-db31c13fcc12\") " Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.974060 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/5df2219c-5da3-46ac-af6d-db31c13fcc12-host" (OuterVolumeSpecName: "host") pod "5df2219c-5da3-46ac-af6d-db31c13fcc12" (UID: "5df2219c-5da3-46ac-af6d-db31c13fcc12"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.974447 4695 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5df2219c-5da3-46ac-af6d-db31c13fcc12-host\") on node \"crc\" DevicePath \"\"" Nov 26 14:26:05 crc kubenswrapper[4695]: I1126 14:26:05.979588 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df2219c-5da3-46ac-af6d-db31c13fcc12-kube-api-access-zkdkx" (OuterVolumeSpecName: "kube-api-access-zkdkx") pod "5df2219c-5da3-46ac-af6d-db31c13fcc12" (UID: "5df2219c-5da3-46ac-af6d-db31c13fcc12"). InnerVolumeSpecName "kube-api-access-zkdkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:26:06 crc kubenswrapper[4695]: I1126 14:26:06.076007 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkdkx\" (UniqueName: \"kubernetes.io/projected/5df2219c-5da3-46ac-af6d-db31c13fcc12-kube-api-access-zkdkx\") on node \"crc\" DevicePath \"\"" Nov 26 14:26:06 crc kubenswrapper[4695]: I1126 14:26:06.754450 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8bd1b329d0a5807750ec3f201e9e60f11aaf871fec67032a03db030c5ad5d8" Nov 26 14:26:06 crc kubenswrapper[4695]: I1126 14:26:06.754574 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-45d8d" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.077022 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8mxld/crc-debug-w92zw"] Nov 26 14:26:07 crc kubenswrapper[4695]: E1126 14:26:07.077440 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df2219c-5da3-46ac-af6d-db31c13fcc12" containerName="container-00" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.077451 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df2219c-5da3-46ac-af6d-db31c13fcc12" containerName="container-00" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.077640 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df2219c-5da3-46ac-af6d-db31c13fcc12" containerName="container-00" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.078231 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.173767 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df2219c-5da3-46ac-af6d-db31c13fcc12" path="/var/lib/kubelet/pods/5df2219c-5da3-46ac-af6d-db31c13fcc12/volumes" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.203180 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-host\") pod \"crc-debug-w92zw\" (UID: \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.203307 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vlmn\" (UniqueName: \"kubernetes.io/projected/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-kube-api-access-8vlmn\") pod \"crc-debug-w92zw\" (UID: 
\"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.304870 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-host\") pod \"crc-debug-w92zw\" (UID: \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.305019 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vlmn\" (UniqueName: \"kubernetes.io/projected/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-kube-api-access-8vlmn\") pod \"crc-debug-w92zw\" (UID: \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.305477 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-host\") pod \"crc-debug-w92zw\" (UID: \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.323784 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vlmn\" (UniqueName: \"kubernetes.io/projected/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-kube-api-access-8vlmn\") pod \"crc-debug-w92zw\" (UID: \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.393912 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:07 crc kubenswrapper[4695]: W1126 14:26:07.431627 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode36f5f51_7bb2_4e0b_abc9_f17cf08f56c0.slice/crio-3a469069b254bccf150018c3db7926bfa73364dbf3d3431f96684d2bf2371a58 WatchSource:0}: Error finding container 3a469069b254bccf150018c3db7926bfa73364dbf3d3431f96684d2bf2371a58: Status 404 returned error can't find the container with id 3a469069b254bccf150018c3db7926bfa73364dbf3d3431f96684d2bf2371a58 Nov 26 14:26:07 crc kubenswrapper[4695]: I1126 14:26:07.977159 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-w92zw" event={"ID":"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0","Type":"ContainerStarted","Data":"3a469069b254bccf150018c3db7926bfa73364dbf3d3431f96684d2bf2371a58"} Nov 26 14:26:08 crc kubenswrapper[4695]: I1126 14:26:08.996291 4695 generic.go:334] "Generic (PLEG): container finished" podID="e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0" containerID="7dd04475d6277ad28bf37717d52c61953147e3eaee199ef72e7209c6a0bc92c4" exitCode=0 Nov 26 14:26:08 crc kubenswrapper[4695]: I1126 14:26:08.996396 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/crc-debug-w92zw" event={"ID":"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0","Type":"ContainerDied","Data":"7dd04475d6277ad28bf37717d52c61953147e3eaee199ef72e7209c6a0bc92c4"} Nov 26 14:26:09 crc kubenswrapper[4695]: I1126 14:26:09.041402 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8mxld/crc-debug-w92zw"] Nov 26 14:26:09 crc kubenswrapper[4695]: I1126 14:26:09.049316 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8mxld/crc-debug-w92zw"] Nov 26 14:26:10 crc kubenswrapper[4695]: I1126 14:26:10.126657 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:10 crc kubenswrapper[4695]: I1126 14:26:10.297365 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vlmn\" (UniqueName: \"kubernetes.io/projected/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-kube-api-access-8vlmn\") pod \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\" (UID: \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " Nov 26 14:26:10 crc kubenswrapper[4695]: I1126 14:26:10.297524 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-host\") pod \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\" (UID: \"e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0\") " Nov 26 14:26:10 crc kubenswrapper[4695]: I1126 14:26:10.297651 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-host" (OuterVolumeSpecName: "host") pod "e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0" (UID: "e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 14:26:10 crc kubenswrapper[4695]: I1126 14:26:10.298448 4695 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-host\") on node \"crc\" DevicePath \"\"" Nov 26 14:26:10 crc kubenswrapper[4695]: I1126 14:26:10.303614 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-kube-api-access-8vlmn" (OuterVolumeSpecName: "kube-api-access-8vlmn") pod "e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0" (UID: "e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0"). InnerVolumeSpecName "kube-api-access-8vlmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:26:10 crc kubenswrapper[4695]: I1126 14:26:10.400660 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vlmn\" (UniqueName: \"kubernetes.io/projected/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0-kube-api-access-8vlmn\") on node \"crc\" DevicePath \"\"" Nov 26 14:26:11 crc kubenswrapper[4695]: I1126 14:26:11.017235 4695 scope.go:117] "RemoveContainer" containerID="7dd04475d6277ad28bf37717d52c61953147e3eaee199ef72e7209c6a0bc92c4" Nov 26 14:26:11 crc kubenswrapper[4695]: I1126 14:26:11.017276 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/crc-debug-w92zw" Nov 26 14:26:11 crc kubenswrapper[4695]: I1126 14:26:11.176315 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0" path="/var/lib/kubelet/pods/e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0/volumes" Nov 26 14:26:13 crc kubenswrapper[4695]: I1126 14:26:13.162294 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:26:13 crc kubenswrapper[4695]: E1126 14:26:13.163407 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.008215 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-766c89cd-88742_4c9406e4-9d5c-429a-94b4-6da4283c3462/barbican-api-log/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.054156 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-766c89cd-88742_4c9406e4-9d5c-429a-94b4-6da4283c3462/barbican-api/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.162632 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:26:26 crc kubenswrapper[4695]: E1126 14:26:26.163367 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.168558 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f5fb9ccb-sctd5_26c9568b-95d0-4b6d-8bed-6da941279a98/barbican-keystone-listener/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.259767 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f5fb9ccb-sctd5_26c9568b-95d0-4b6d-8bed-6da941279a98/barbican-keystone-listener-log/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.379486 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc8f95cf-vpnct_71ec1963-a024-4fc4-a747-3c2ee03603a4/barbican-worker/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.473138 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc8f95cf-vpnct_71ec1963-a024-4fc4-a747-3c2ee03603a4/barbican-worker-log/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.557716 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x_6b85ca84-0932-4ed9-bcc9-883e52f07315/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.684879 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/ceilometer-central-agent/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.715907 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/ceilometer-notification-agent/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.796900 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/proxy-httpd/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.883778 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/sg-core/0.log" Nov 26 14:26:26 crc kubenswrapper[4695]: I1126 14:26:26.930579 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_98045621-506a-4a2b-a135-ed37abdf8de5/cinder-api/0.log" Nov 26 14:26:27 crc kubenswrapper[4695]: I1126 14:26:27.041591 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_98045621-506a-4a2b-a135-ed37abdf8de5/cinder-api-log/0.log" Nov 26 14:26:27 crc kubenswrapper[4695]: I1126 14:26:27.729197 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_65648e25-0d32-4537-9e31-e9ca87f02aea/probe/0.log" Nov 26 14:26:27 crc kubenswrapper[4695]: I1126 14:26:27.753743 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2_960d575b-5f75-45a2-8dbe-dd185d9dc0a0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:27 crc kubenswrapper[4695]: I1126 
14:26:27.754451 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_65648e25-0d32-4537-9e31-e9ca87f02aea/cinder-scheduler/0.log" Nov 26 14:26:27 crc kubenswrapper[4695]: I1126 14:26:27.939713 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc_168bfff7-248e-4717-beac-8f7986a5d31e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:27 crc kubenswrapper[4695]: I1126 14:26:27.954306 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-pblxz_86275742-143a-41e5-8029-aa251663c12e/init/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.199960 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-pblxz_86275742-143a-41e5-8029-aa251663c12e/init/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.246898 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4rf24_2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.252826 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-pblxz_86275742-143a-41e5-8029-aa251663c12e/dnsmasq-dns/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.445978 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5a16aeb-231a-4012-9aed-ab91a1fab41e/glance-httpd/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.449453 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5a16aeb-231a-4012-9aed-ab91a1fab41e/glance-log/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.616729 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_d83aab2a-dcf3-44a5-9616-e19d698ea43d/glance-log/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.620014 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d83aab2a-dcf3-44a5-9616-e19d698ea43d/glance-httpd/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.782121 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7d4c9c9dbd-9bbnw_3ca1545d-04c5-45f8-8738-f662db77ffba/horizon/0.log" Nov 26 14:26:28 crc kubenswrapper[4695]: I1126 14:26:28.943032 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2jszc_509c6c88-4720-4dcc-b9fc-e50ef40c4a6f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:29 crc kubenswrapper[4695]: I1126 14:26:29.074856 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7d4c9c9dbd-9bbnw_3ca1545d-04c5-45f8-8738-f662db77ffba/horizon-log/0.log" Nov 26 14:26:29 crc kubenswrapper[4695]: I1126 14:26:29.137804 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ljn7l_87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:29 crc kubenswrapper[4695]: I1126 14:26:29.320022 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29402761-sd4pc_ab5969ee-b42f-466b-9087-adf2da1d7785/keystone-cron/0.log" Nov 26 14:26:29 crc kubenswrapper[4695]: I1126 14:26:29.387750 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-776844bc66-7hpvs_4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64/keystone-api/0.log" Nov 26 14:26:29 crc kubenswrapper[4695]: I1126 14:26:29.527488 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f/kube-state-metrics/0.log" Nov 26 14:26:29 crc kubenswrapper[4695]: I1126 14:26:29.615111 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7_76b33613-bb4c-4e62-9574-4372603edc01/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:30 crc kubenswrapper[4695]: I1126 14:26:30.102744 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcbb85b97-9vnqg_9c6bc4a2-d4c9-4bdc-a576-03bf4101b606/neutron-api/0.log" Nov 26 14:26:30 crc kubenswrapper[4695]: I1126 14:26:30.179964 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcbb85b97-9vnqg_9c6bc4a2-d4c9-4bdc-a576-03bf4101b606/neutron-httpd/0.log" Nov 26 14:26:30 crc kubenswrapper[4695]: I1126 14:26:30.238028 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm_7ac47dbe-143a-49da-80b2-e60fc44ebaf4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:30 crc kubenswrapper[4695]: I1126 14:26:30.690989 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5a3fc356-06d7-4e26-bcfb-c610dc6e02be/nova-cell0-conductor-conductor/0.log" Nov 26 14:26:30 crc kubenswrapper[4695]: I1126 14:26:30.790247 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c2fff62-d355-4448-a2f8-a2d9f5c13e9a/nova-api-log/0.log" Nov 26 14:26:30 crc kubenswrapper[4695]: I1126 14:26:30.894754 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c2fff62-d355-4448-a2f8-a2d9f5c13e9a/nova-api-api/0.log" Nov 26 14:26:30 crc kubenswrapper[4695]: I1126 14:26:30.989824 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f8537527-a9d6-41b1-b7ad-a281ee216c45/nova-cell1-conductor-conductor/0.log" 
Nov 26 14:26:31 crc kubenswrapper[4695]: I1126 14:26:31.090009 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0ddfca26-214e-472a-90ca-e0088717125e/nova-cell1-novncproxy-novncproxy/0.log" Nov 26 14:26:31 crc kubenswrapper[4695]: I1126 14:26:31.274538 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-2jzr6_9f43b614-f241-4689-b15b-26bdf3d6e72d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:31 crc kubenswrapper[4695]: I1126 14:26:31.426427 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec2ef622-e87b-4dde-a1ca-81496cfd3562/nova-metadata-log/0.log" Nov 26 14:26:31 crc kubenswrapper[4695]: I1126 14:26:31.649051 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4d67e67d-ff4c-46e4-b0af-9eb1c017bf46/nova-scheduler-scheduler/0.log" Nov 26 14:26:31 crc kubenswrapper[4695]: I1126 14:26:31.730751 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b02f07d7-7406-4602-b166-911408fe8bf0/mysql-bootstrap/0.log" Nov 26 14:26:31 crc kubenswrapper[4695]: I1126 14:26:31.915473 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b02f07d7-7406-4602-b166-911408fe8bf0/galera/0.log" Nov 26 14:26:31 crc kubenswrapper[4695]: I1126 14:26:31.960692 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b02f07d7-7406-4602-b166-911408fe8bf0/mysql-bootstrap/0.log" Nov 26 14:26:32 crc kubenswrapper[4695]: I1126 14:26:32.137987 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_82b6b21a-6ed0-43d7-9763-684eca59aa29/mysql-bootstrap/0.log" Nov 26 14:26:32 crc kubenswrapper[4695]: I1126 14:26:32.333677 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_82b6b21a-6ed0-43d7-9763-684eca59aa29/mysql-bootstrap/0.log" Nov 26 14:26:32 crc kubenswrapper[4695]: I1126 14:26:32.393438 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_82b6b21a-6ed0-43d7-9763-684eca59aa29/galera/0.log" Nov 26 14:26:32 crc kubenswrapper[4695]: I1126 14:26:32.551434 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec2ef622-e87b-4dde-a1ca-81496cfd3562/nova-metadata-metadata/0.log" Nov 26 14:26:32 crc kubenswrapper[4695]: I1126 14:26:32.599634 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d282a7dc-4e06-4e82-8b99-ce6f8416c5cc/openstackclient/0.log" Nov 26 14:26:33 crc kubenswrapper[4695]: I1126 14:26:33.033946 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5cm6x_22e7a6af-7195-45fd-979b-4af39f3cfb62/openstack-network-exporter/0.log" Nov 26 14:26:33 crc kubenswrapper[4695]: I1126 14:26:33.138742 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovsdb-server-init/0.log" Nov 26 14:26:33 crc kubenswrapper[4695]: I1126 14:26:33.296199 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovsdb-server/0.log" Nov 26 14:26:33 crc kubenswrapper[4695]: I1126 14:26:33.326826 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovsdb-server-init/0.log" Nov 26 14:26:33 crc kubenswrapper[4695]: I1126 14:26:33.406193 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovs-vswitchd/0.log" Nov 26 14:26:33 crc kubenswrapper[4695]: I1126 14:26:33.524340 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-zvx8d_9f98833b-dbaf-42bc-a424-8094e025ce87/ovn-controller/0.log" Nov 26 14:26:33 crc kubenswrapper[4695]: I1126 14:26:33.716048 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-l77jw_72410dcc-406c-43d5-bc58-320471e9df04/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.175334 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26637d33-5a10-4201-b728-2a250279651b/openstack-network-exporter/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.216062 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26637d33-5a10-4201-b728-2a250279651b/ovn-northd/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.300896 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8/openstack-network-exporter/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.396000 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8/ovsdbserver-nb/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.499804 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b28b52fd-d5e1-44b4-af26-9fa98d731335/ovsdbserver-sb/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.544730 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b28b52fd-d5e1-44b4-af26-9fa98d731335/openstack-network-exporter/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.765944 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-666cd5b87b-cmnl9_2959a379-6a03-4c8d-b022-47e69ac7636d/placement-log/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.810257 4695 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e7335d8e-0d9a-4532-9f5b-d91cafe38ca7/setup-container/0.log" Nov 26 14:26:34 crc kubenswrapper[4695]: I1126 14:26:34.843176 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-666cd5b87b-cmnl9_2959a379-6a03-4c8d-b022-47e69ac7636d/placement-api/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.083441 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e7335d8e-0d9a-4532-9f5b-d91cafe38ca7/rabbitmq/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.087470 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e7335d8e-0d9a-4532-9f5b-d91cafe38ca7/setup-container/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.094922 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_51da5818-5d05-4f99-84a7-93eae660a8a7/setup-container/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.311299 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_51da5818-5d05-4f99-84a7-93eae660a8a7/setup-container/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.333698 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_51da5818-5d05-4f99-84a7-93eae660a8a7/rabbitmq/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.432825 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh_fbe77fc2-bcee-446d-a02c-5a992ab5dcae/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.526811 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2pppt_6d8a0921-3704-485a-8ee9-c6250fd2d59e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:35 crc 
kubenswrapper[4695]: I1126 14:26:35.671376 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78_d8407b26-4534-4252-bccf-4e82cea0cd6e/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.826291 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-2twlt_d77b20d3-631b-481b-b480-226968d0b73c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:35 crc kubenswrapper[4695]: I1126 14:26:35.986035 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-76hkl_08ef121f-97dc-4e9e-a466-438d25f2391e/ssh-known-hosts-edpm-deployment/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.231857 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7d6d6689f5-n925b_0cb53a75-c198-433b-b342-7acf8ed7dc0c/proxy-server/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.240527 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7d6d6689f5-n925b_0cb53a75-c198-433b-b342-7acf8ed7dc0c/proxy-httpd/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.434429 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-n2464_bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205/swift-ring-rebalance/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.455297 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-auditor/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.501391 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-reaper/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.670748 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-replicator/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.703269 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-server/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.721735 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-auditor/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.824376 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-replicator/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.901882 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-updater/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.950238 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-server/0.log" Nov 26 14:26:36 crc kubenswrapper[4695]: I1126 14:26:36.981192 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-auditor/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.059287 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-expirer/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.153840 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-server/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.175723 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" 
Nov 26 14:26:37 crc kubenswrapper[4695]: E1126 14:26:37.176102 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.190731 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-replicator/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.208512 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-updater/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.312759 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/rsync/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.372756 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/swift-recon-cron/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.507032 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs_0fdaed7b-61f1-4840-88c7-f997a45a27ca/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.575390 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d7930b08-66ca-496a-94a1-b68e2fe60177/tempest-tests-tempest-tests-runner/0.log" Nov 26 14:26:37 crc kubenswrapper[4695]: I1126 14:26:37.984634 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5e2a3eda-3b39-4953-b576-ae3652af0195/test-operator-logs-container/0.log" Nov 26 14:26:38 crc kubenswrapper[4695]: I1126 14:26:38.048860 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jbnll_354489f4-e2ae-4a52-8708-5c495c729662/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:26:46 crc kubenswrapper[4695]: I1126 14:26:46.064969 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_95ed69b1-d83c-4967-a627-6e52dc6da41b/memcached/0.log" Nov 26 14:26:48 crc kubenswrapper[4695]: I1126 14:26:48.162917 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:26:48 crc kubenswrapper[4695]: E1126 14:26:48.163514 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:27:02 crc kubenswrapper[4695]: I1126 14:27:02.580602 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/util/0.log" Nov 26 14:27:02 crc kubenswrapper[4695]: I1126 14:27:02.764337 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/pull/0.log" Nov 26 14:27:02 crc kubenswrapper[4695]: I1126 14:27:02.792261 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/util/0.log" Nov 26 14:27:02 crc kubenswrapper[4695]: I1126 14:27:02.793783 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/pull/0.log" Nov 26 14:27:02 crc kubenswrapper[4695]: I1126 14:27:02.949015 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/pull/0.log" Nov 26 14:27:02 crc kubenswrapper[4695]: I1126 14:27:02.979099 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/util/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.033693 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/extract/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.149741 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-xtp7h_a487eafc-c65d-4ce9-b801-e489882a4dfa/kube-rbac-proxy/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.162042 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:27:03 crc kubenswrapper[4695]: E1126 14:27:03.162282 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.205224 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-xtp7h_a487eafc-c65d-4ce9-b801-e489882a4dfa/manager/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.258937 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-qbchs_868435aa-9f77-46df-af13-ae24b16dee14/kube-rbac-proxy/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.412772 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-qbchs_868435aa-9f77-46df-af13-ae24b16dee14/manager/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.503607 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-twcbz_dbf58d06-6729-4a3e-8682-641649f1ecd2/manager/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.505770 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-twcbz_dbf58d06-6729-4a3e-8682-641649f1ecd2/kube-rbac-proxy/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.647570 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-6t784_490f30b9-4b79-4a35-a77d-44c8a90b5dcf/kube-rbac-proxy/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.747324 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-6t784_490f30b9-4b79-4a35-a77d-44c8a90b5dcf/manager/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.807043 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-s22qh_f76ff94a-b97d-4bbc-bc03-3b8df6d35095/kube-rbac-proxy/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.837295 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-s22qh_f76ff94a-b97d-4bbc-bc03-3b8df6d35095/manager/0.log" Nov 26 14:27:03 crc kubenswrapper[4695]: I1126 14:27:03.935137 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-mljjd_d71ecb02-382d-4fde-b349-343c97f769fd/kube-rbac-proxy/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.070876 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-mljjd_d71ecb02-382d-4fde-b349-343c97f769fd/manager/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.113252 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-v2jnr_c66a73b5-1103-497f-87a6-70d964111fc9/kube-rbac-proxy/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.261359 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-v2jnr_c66a73b5-1103-497f-87a6-70d964111fc9/manager/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.317512 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-vk52w_d84948ad-d0e9-4d86-97a7-1a0d9e13d858/kube-rbac-proxy/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.359430 4695 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-vk52w_d84948ad-d0e9-4d86-97a7-1a0d9e13d858/manager/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.500761 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ndnf7_8d102669-be66-4bc6-8328-3e7d8a66f4c1/kube-rbac-proxy/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.572072 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ndnf7_8d102669-be66-4bc6-8328-3e7d8a66f4c1/manager/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.658632 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-n8tkz_fcca3fad-5da8-4242-894e-9dd5917f3828/kube-rbac-proxy/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.729241 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-n8tkz_fcca3fad-5da8-4242-894e-9dd5917f3828/manager/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.826067 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-28gfw_e1127f2e-e8b5-4002-9f8b-7f3a286640ba/kube-rbac-proxy/0.log" Nov 26 14:27:04 crc kubenswrapper[4695]: I1126 14:27:04.905739 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-28gfw_e1127f2e-e8b5-4002-9f8b-7f3a286640ba/manager/0.log" Nov 26 14:27:05 crc kubenswrapper[4695]: I1126 14:27:05.033299 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-rbpf5_7e515c4b-ebc1-42fc-a3b9-406552e7f797/kube-rbac-proxy/0.log" Nov 26 14:27:05 crc 
kubenswrapper[4695]: I1126 14:27:05.079652 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-rbpf5_7e515c4b-ebc1-42fc-a3b9-406552e7f797/manager/0.log" Nov 26 14:27:05 crc kubenswrapper[4695]: I1126 14:27:05.178165 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-5qjxh_5c7bfa9c-0c31-4ece-915a-c4e4d37fadad/kube-rbac-proxy/0.log" Nov 26 14:27:05 crc kubenswrapper[4695]: I1126 14:27:05.296802 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-5qjxh_5c7bfa9c-0c31-4ece-915a-c4e4d37fadad/manager/0.log" Nov 26 14:27:05 crc kubenswrapper[4695]: I1126 14:27:05.519009 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-s9mbz_51be52bb-362c-4b52-9962-a5e6b3e9dddb/manager/0.log" Nov 26 14:27:05 crc kubenswrapper[4695]: I1126 14:27:05.610653 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-s9mbz_51be52bb-362c-4b52-9962-a5e6b3e9dddb/kube-rbac-proxy/0.log" Nov 26 14:27:05 crc kubenswrapper[4695]: I1126 14:27:05.691891 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f_071300a7-9f99-4e3f-8fd7-ceabb7ba738d/kube-rbac-proxy/0.log" Nov 26 14:27:05 crc kubenswrapper[4695]: I1126 14:27:05.750731 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f_071300a7-9f99-4e3f-8fd7-ceabb7ba738d/manager/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.084273 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-78fd744894-tc5nj_301c123c-e342-4fda-b713-03954d29dd4a/operator/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.294846 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-r56l2_281f751a-55f6-4753-8014-8e52bd983a45/registry-server/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.408095 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-chvvt_16e51188-65ad-4a0d-a571-5f02e38d68b6/kube-rbac-proxy/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.574358 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-chvvt_16e51188-65ad-4a0d-a571-5f02e38d68b6/manager/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.635234 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-j27xp_ba47d9b1-160d-40da-a691-db4b4e2557d5/kube-rbac-proxy/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.830605 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-j27xp_ba47d9b1-160d-40da-a691-db4b4e2557d5/manager/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.903677 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-z2pb5_375f27a9-421e-422e-baee-6d5ac575788a/operator/0.log" Nov 26 14:27:06 crc kubenswrapper[4695]: I1126 14:27:06.996835 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bcd57bc9d-w2r7r_4c653020-2777-48e3-b06f-b33a61aabc36/manager/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.056658 4695 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-t7sqd_b044c065-4ba3-4390-88c9-340e2fc1ba2f/kube-rbac-proxy/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.102463 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-t7sqd_b044c065-4ba3-4390-88c9-340e2fc1ba2f/manager/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.206176 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vdrzg_92f90071-d080-4579-9a87-aef8e8b760d3/kube-rbac-proxy/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.270916 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vdrzg_92f90071-d080-4579-9a87-aef8e8b760d3/manager/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.314779 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-z6584_b87cee59-5442-46a0-b5d2-8467196ceedf/kube-rbac-proxy/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.370614 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-z6584_b87cee59-5442-46a0-b5d2-8467196ceedf/manager/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.472773 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-jsmdt_9716d6f2-3c85-4d5d-a261-966d0e6d6dfc/kube-rbac-proxy/0.log" Nov 26 14:27:07 crc kubenswrapper[4695]: I1126 14:27:07.675543 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-jsmdt_9716d6f2-3c85-4d5d-a261-966d0e6d6dfc/manager/0.log" Nov 26 14:27:11 crc 
kubenswrapper[4695]: I1126 14:27:11.797526 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dgrxh"] Nov 26 14:27:11 crc kubenswrapper[4695]: E1126 14:27:11.798548 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0" containerName="container-00" Nov 26 14:27:11 crc kubenswrapper[4695]: I1126 14:27:11.798566 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0" containerName="container-00" Nov 26 14:27:11 crc kubenswrapper[4695]: I1126 14:27:11.798842 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="e36f5f51-7bb2-4e0b-abc9-f17cf08f56c0" containerName="container-00" Nov 26 14:27:11 crc kubenswrapper[4695]: I1126 14:27:11.800253 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:11 crc kubenswrapper[4695]: I1126 14:27:11.815198 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgrxh"] Nov 26 14:27:11 crc kubenswrapper[4695]: I1126 14:27:11.915341 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-catalog-content\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:11 crc kubenswrapper[4695]: I1126 14:27:11.915428 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtgxl\" (UniqueName: \"kubernetes.io/projected/7436ac5d-1749-4787-93ad-2ea7c522614c-kube-api-access-wtgxl\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:11 crc kubenswrapper[4695]: I1126 14:27:11.915793 4695 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-utilities\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:12 crc kubenswrapper[4695]: I1126 14:27:12.017221 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-catalog-content\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:12 crc kubenswrapper[4695]: I1126 14:27:12.017284 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtgxl\" (UniqueName: \"kubernetes.io/projected/7436ac5d-1749-4787-93ad-2ea7c522614c-kube-api-access-wtgxl\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:12 crc kubenswrapper[4695]: I1126 14:27:12.017460 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-utilities\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:12 crc kubenswrapper[4695]: I1126 14:27:12.017998 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-utilities\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:12 crc kubenswrapper[4695]: I1126 14:27:12.018271 4695 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-catalog-content\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:12 crc kubenswrapper[4695]: I1126 14:27:12.050835 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtgxl\" (UniqueName: \"kubernetes.io/projected/7436ac5d-1749-4787-93ad-2ea7c522614c-kube-api-access-wtgxl\") pod \"redhat-operators-dgrxh\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:12 crc kubenswrapper[4695]: I1126 14:27:12.125387 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:13 crc kubenswrapper[4695]: I1126 14:27:13.915144 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dgrxh"] Nov 26 14:27:14 crc kubenswrapper[4695]: I1126 14:27:14.724547 4695 generic.go:334] "Generic (PLEG): container finished" podID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerID="4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f" exitCode=0 Nov 26 14:27:14 crc kubenswrapper[4695]: I1126 14:27:14.724612 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrxh" event={"ID":"7436ac5d-1749-4787-93ad-2ea7c522614c","Type":"ContainerDied","Data":"4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f"} Nov 26 14:27:14 crc kubenswrapper[4695]: I1126 14:27:14.724931 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrxh" event={"ID":"7436ac5d-1749-4787-93ad-2ea7c522614c","Type":"ContainerStarted","Data":"df6bc83ea032f255c792b1ec5151c279e61b37e45533488a599b059af6189d26"} Nov 26 14:27:14 crc kubenswrapper[4695]: I1126 14:27:14.727105 4695 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 14:27:16 crc kubenswrapper[4695]: I1126 14:27:16.741566 4695 generic.go:334] "Generic (PLEG): container finished" podID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerID="c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3" exitCode=0 Nov 26 14:27:16 crc kubenswrapper[4695]: I1126 14:27:16.741615 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrxh" event={"ID":"7436ac5d-1749-4787-93ad-2ea7c522614c","Type":"ContainerDied","Data":"c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3"} Nov 26 14:27:17 crc kubenswrapper[4695]: I1126 14:27:17.758155 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrxh" event={"ID":"7436ac5d-1749-4787-93ad-2ea7c522614c","Type":"ContainerStarted","Data":"9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3"} Nov 26 14:27:17 crc kubenswrapper[4695]: I1126 14:27:17.783182 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dgrxh" podStartSLOduration=4.364100833 podStartE2EDuration="6.783163087s" podCreationTimestamp="2025-11-26 14:27:11 +0000 UTC" firstStartedPulling="2025-11-26 14:27:14.726874416 +0000 UTC m=+3818.362699498" lastFinishedPulling="2025-11-26 14:27:17.14593667 +0000 UTC m=+3820.781761752" observedRunningTime="2025-11-26 14:27:17.776813675 +0000 UTC m=+3821.412638757" watchObservedRunningTime="2025-11-26 14:27:17.783163087 +0000 UTC m=+3821.418988169" Nov 26 14:27:18 crc kubenswrapper[4695]: I1126 14:27:18.162726 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:27:18 crc kubenswrapper[4695]: E1126 14:27:18.163214 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:27:22 crc kubenswrapper[4695]: I1126 14:27:22.129005 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:22 crc kubenswrapper[4695]: I1126 14:27:22.129575 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:23 crc kubenswrapper[4695]: I1126 14:27:23.173956 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dgrxh" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="registry-server" probeResult="failure" output=< Nov 26 14:27:23 crc kubenswrapper[4695]: timeout: failed to connect service ":50051" within 1s Nov 26 14:27:23 crc kubenswrapper[4695]: > Nov 26 14:27:26 crc kubenswrapper[4695]: I1126 14:27:26.636488 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5sbb9_7645629b-9409-48bb-94cc-a63e3bd1fe4b/control-plane-machine-set-operator/0.log" Nov 26 14:27:26 crc kubenswrapper[4695]: I1126 14:27:26.811322 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-72w4t_6f93277a-4f74-4839-8b28-2ff1bfd6f7ca/machine-api-operator/0.log" Nov 26 14:27:26 crc kubenswrapper[4695]: I1126 14:27:26.823648 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-72w4t_6f93277a-4f74-4839-8b28-2ff1bfd6f7ca/kube-rbac-proxy/0.log" Nov 26 14:27:30 crc kubenswrapper[4695]: I1126 14:27:30.162544 4695 scope.go:117] "RemoveContainer" 
containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:27:30 crc kubenswrapper[4695]: E1126 14:27:30.163264 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:27:32 crc kubenswrapper[4695]: I1126 14:27:32.190173 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:32 crc kubenswrapper[4695]: I1126 14:27:32.247494 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:32 crc kubenswrapper[4695]: I1126 14:27:32.422013 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgrxh"] Nov 26 14:27:33 crc kubenswrapper[4695]: I1126 14:27:33.881920 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dgrxh" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="registry-server" containerID="cri-o://9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3" gracePeriod=2 Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.370142 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.538320 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-utilities\") pod \"7436ac5d-1749-4787-93ad-2ea7c522614c\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.538504 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtgxl\" (UniqueName: \"kubernetes.io/projected/7436ac5d-1749-4787-93ad-2ea7c522614c-kube-api-access-wtgxl\") pod \"7436ac5d-1749-4787-93ad-2ea7c522614c\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.538551 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-catalog-content\") pod \"7436ac5d-1749-4787-93ad-2ea7c522614c\" (UID: \"7436ac5d-1749-4787-93ad-2ea7c522614c\") " Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.539069 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-utilities" (OuterVolumeSpecName: "utilities") pod "7436ac5d-1749-4787-93ad-2ea7c522614c" (UID: "7436ac5d-1749-4787-93ad-2ea7c522614c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.544491 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7436ac5d-1749-4787-93ad-2ea7c522614c-kube-api-access-wtgxl" (OuterVolumeSpecName: "kube-api-access-wtgxl") pod "7436ac5d-1749-4787-93ad-2ea7c522614c" (UID: "7436ac5d-1749-4787-93ad-2ea7c522614c"). InnerVolumeSpecName "kube-api-access-wtgxl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.625151 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7436ac5d-1749-4787-93ad-2ea7c522614c" (UID: "7436ac5d-1749-4787-93ad-2ea7c522614c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.640821 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.640856 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtgxl\" (UniqueName: \"kubernetes.io/projected/7436ac5d-1749-4787-93ad-2ea7c522614c-kube-api-access-wtgxl\") on node \"crc\" DevicePath \"\"" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.640869 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7436ac5d-1749-4787-93ad-2ea7c522614c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.896955 4695 generic.go:334] "Generic (PLEG): container finished" podID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerID="9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3" exitCode=0 Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.897944 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dgrxh" event={"ID":"7436ac5d-1749-4787-93ad-2ea7c522614c","Type":"ContainerDied","Data":"9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3"} Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.898090 4695 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-dgrxh" event={"ID":"7436ac5d-1749-4787-93ad-2ea7c522614c","Type":"ContainerDied","Data":"df6bc83ea032f255c792b1ec5151c279e61b37e45533488a599b059af6189d26"} Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.898124 4695 scope.go:117] "RemoveContainer" containerID="9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.898394 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dgrxh" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.922696 4695 scope.go:117] "RemoveContainer" containerID="c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.947417 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dgrxh"] Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.953478 4695 scope.go:117] "RemoveContainer" containerID="4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f" Nov 26 14:27:34 crc kubenswrapper[4695]: I1126 14:27:34.958906 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dgrxh"] Nov 26 14:27:34 crc kubenswrapper[4695]: E1126 14:27:34.994057 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7436ac5d_1749_4787_93ad_2ea7c522614c.slice\": RecentStats: unable to find data in memory cache]" Nov 26 14:27:35 crc kubenswrapper[4695]: I1126 14:27:35.009172 4695 scope.go:117] "RemoveContainer" containerID="9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3" Nov 26 14:27:35 crc kubenswrapper[4695]: E1126 14:27:35.010165 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3\": container with ID starting with 9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3 not found: ID does not exist" containerID="9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3" Nov 26 14:27:35 crc kubenswrapper[4695]: I1126 14:27:35.010207 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3"} err="failed to get container status \"9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3\": rpc error: code = NotFound desc = could not find container \"9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3\": container with ID starting with 9e3070741504bdcfe3b1638638ccf81f701bcb7c7b9ea7aed066c0acfc6bf6b3 not found: ID does not exist" Nov 26 14:27:35 crc kubenswrapper[4695]: I1126 14:27:35.010232 4695 scope.go:117] "RemoveContainer" containerID="c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3" Nov 26 14:27:35 crc kubenswrapper[4695]: E1126 14:27:35.010510 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3\": container with ID starting with c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3 not found: ID does not exist" containerID="c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3" Nov 26 14:27:35 crc kubenswrapper[4695]: I1126 14:27:35.010538 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3"} err="failed to get container status \"c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3\": rpc error: code = NotFound desc = could not find container \"c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3\": container with ID 
starting with c8fc69bcc7621333fa69eb8b425da024f04c8f21a76a1424818413f8717f6ea3 not found: ID does not exist" Nov 26 14:27:35 crc kubenswrapper[4695]: I1126 14:27:35.010555 4695 scope.go:117] "RemoveContainer" containerID="4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f" Nov 26 14:27:35 crc kubenswrapper[4695]: E1126 14:27:35.011706 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f\": container with ID starting with 4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f not found: ID does not exist" containerID="4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f" Nov 26 14:27:35 crc kubenswrapper[4695]: I1126 14:27:35.011736 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f"} err="failed to get container status \"4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f\": rpc error: code = NotFound desc = could not find container \"4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f\": container with ID starting with 4e13041cbf0fb68729ee382134c10a56a883fdac01845e4fa3b02ff34773274f not found: ID does not exist" Nov 26 14:27:35 crc kubenswrapper[4695]: I1126 14:27:35.174511 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" path="/var/lib/kubelet/pods/7436ac5d-1749-4787-93ad-2ea7c522614c/volumes" Nov 26 14:27:38 crc kubenswrapper[4695]: I1126 14:27:38.571793 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-dsj6h_cff7ef71-26ea-4334-b5b0-9d6c931d6fff/cert-manager-controller/0.log" Nov 26 14:27:38 crc kubenswrapper[4695]: I1126 14:27:38.707110 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-g67gn_f548c3df-1cb9-4e28-af35-4471c3633b76/cert-manager-cainjector/0.log" Nov 26 14:27:38 crc kubenswrapper[4695]: I1126 14:27:38.819896 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-nczqg_221fa061-b961-4fb4-b8bb-e280873ce253/cert-manager-webhook/0.log" Nov 26 14:27:41 crc kubenswrapper[4695]: I1126 14:27:41.163129 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:27:41 crc kubenswrapper[4695]: E1126 14:27:41.166130 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:27:50 crc kubenswrapper[4695]: I1126 14:27:50.456365 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-4jrpz_2c957cde-292c-4ede-a2c3-dd684372157e/nmstate-console-plugin/0.log" Nov 26 14:27:50 crc kubenswrapper[4695]: I1126 14:27:50.682451 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-72fwr_11fb4b6b-098b-49d0-884e-460720ddfcd5/nmstate-handler/0.log" Nov 26 14:27:50 crc kubenswrapper[4695]: I1126 14:27:50.760047 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-9trxh_e52265d8-4340-457f-824f-c593dc560e5b/nmstate-metrics/0.log" Nov 26 14:27:50 crc kubenswrapper[4695]: I1126 14:27:50.761441 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-9trxh_e52265d8-4340-457f-824f-c593dc560e5b/kube-rbac-proxy/0.log" Nov 26 14:27:50 crc kubenswrapper[4695]: I1126 14:27:50.922607 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-ttqk2_326ba3c5-ae42-4131-99a0-2ef80841d58b/nmstate-operator/0.log" Nov 26 14:27:50 crc kubenswrapper[4695]: I1126 14:27:50.989608 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-b7f9t_61ef0d85-8eb9-4241-958b-12c3a4b4a064/nmstate-webhook/0.log" Nov 26 14:27:56 crc kubenswrapper[4695]: I1126 14:27:56.162014 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:27:56 crc kubenswrapper[4695]: E1126 14:27:56.162790 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.225201 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7h5k9_8c2eab4a-4615-4dce-a0bb-e3316d4e2be9/kube-rbac-proxy/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.443155 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7h5k9_8c2eab4a-4615-4dce-a0bb-e3316d4e2be9/controller/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.457213 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 
14:28:04.611293 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.639394 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.649487 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.667648 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.846532 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.855416 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.855960 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:28:04 crc kubenswrapper[4695]: I1126 14:28:04.949119 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.053092 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.070811 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.078475 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.141083 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/controller/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.250247 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/frr-metrics/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.294369 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/kube-rbac-proxy/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.427544 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/kube-rbac-proxy-frr/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.450661 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/reloader/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.631517 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-5sn6p_9b894f31-fadd-4034-a93a-d7767eb59691/frr-k8s-webhook-server/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.773628 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d6f5fdd48-hdrhk_ae146f87-799d-4013-954b-7b3df8521851/manager/0.log" Nov 26 14:28:05 crc kubenswrapper[4695]: I1126 14:28:05.950927 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-webhook-server-868bccc468-mss5n_e92050c8-b486-4429-ad39-f39f154ff06f/webhook-server/0.log" Nov 26 14:28:06 crc kubenswrapper[4695]: I1126 14:28:06.078966 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4b6f5_bc3d7aa1-897c-44e4-a493-4a80ef1142fe/kube-rbac-proxy/0.log" Nov 26 14:28:06 crc kubenswrapper[4695]: I1126 14:28:06.594138 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/frr/0.log" Nov 26 14:28:06 crc kubenswrapper[4695]: I1126 14:28:06.602721 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4b6f5_bc3d7aa1-897c-44e4-a493-4a80ef1142fe/speaker/0.log" Nov 26 14:28:09 crc kubenswrapper[4695]: I1126 14:28:09.163211 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:28:09 crc kubenswrapper[4695]: E1126 14:28:09.163811 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.048544 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/util/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.202278 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/util/0.log" Nov 26 14:28:18 crc 
kubenswrapper[4695]: I1126 14:28:18.233221 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/pull/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.308882 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/pull/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.464309 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/util/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.464953 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/pull/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.481472 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/extract/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.602518 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-utilities/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.797501 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-content/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.802172 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-content/0.log" 
Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.802638 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-utilities/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.989832 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-utilities/0.log" Nov 26 14:28:18 crc kubenswrapper[4695]: I1126 14:28:18.991196 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-content/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.162694 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-utilities/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.424643 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-content/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.471392 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-utilities/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.477379 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/registry-server/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.516306 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-content/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.652508 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-utilities/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.675467 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-content/0.log" Nov 26 14:28:19 crc kubenswrapper[4695]: I1126 14:28:19.874295 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/util/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.104817 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/pull/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.126470 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/pull/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.134282 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/util/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.162511 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:28:20 crc kubenswrapper[4695]: E1126 14:28:20.162814 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.182463 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/registry-server/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.322503 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/pull/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.344008 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/util/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.373961 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/extract/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.481628 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kgsbp_bfc137bd-03b5-4b18-a610-f713f2681cc1/marketplace-operator/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.554093 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-utilities/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.736457 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-utilities/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.751017 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-content/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.763398 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-content/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.901593 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-content/0.log" Nov 26 14:28:20 crc kubenswrapper[4695]: I1126 14:28:20.917592 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-utilities/0.log" Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.048452 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/registry-server/0.log" Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.090021 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-utilities/0.log" Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.250773 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-utilities/0.log" Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.268432 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-content/0.log" Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.293251 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-content/0.log" 
Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.476724 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-utilities/0.log" Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.486634 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-content/0.log" Nov 26 14:28:21 crc kubenswrapper[4695]: I1126 14:28:21.977239 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/registry-server/0.log" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.623917 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qcrr6"] Nov 26 14:28:23 crc kubenswrapper[4695]: E1126 14:28:23.627887 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="extract-content" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.628141 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="extract-content" Nov 26 14:28:23 crc kubenswrapper[4695]: E1126 14:28:23.628590 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="extract-utilities" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.628697 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="extract-utilities" Nov 26 14:28:23 crc kubenswrapper[4695]: E1126 14:28:23.628771 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="registry-server" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.628929 4695 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="registry-server" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.629326 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="7436ac5d-1749-4787-93ad-2ea7c522614c" containerName="registry-server" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.631146 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.646790 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcrr6"] Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.770156 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-utilities\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.770566 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-catalog-content\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.770604 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pwgl\" (UniqueName: \"kubernetes.io/projected/3e42415f-5a2a-4b90-b716-1c3fc4355c27-kube-api-access-6pwgl\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.872489 4695 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-catalog-content\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.872541 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pwgl\" (UniqueName: \"kubernetes.io/projected/3e42415f-5a2a-4b90-b716-1c3fc4355c27-kube-api-access-6pwgl\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.872682 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-utilities\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.873062 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-catalog-content\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.873097 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-utilities\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.905824 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-6pwgl\" (UniqueName: \"kubernetes.io/projected/3e42415f-5a2a-4b90-b716-1c3fc4355c27-kube-api-access-6pwgl\") pod \"redhat-marketplace-qcrr6\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:23 crc kubenswrapper[4695]: I1126 14:28:23.961811 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.202916 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-btbvr"] Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.205251 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.213694 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btbvr"] Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.282377 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-catalog-content\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.282514 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-utilities\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.282537 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfh85\" 
(UniqueName: \"kubernetes.io/projected/f5438aac-f336-484d-bd46-9c316711d2f3-kube-api-access-rfh85\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.386963 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-utilities\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.387008 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfh85\" (UniqueName: \"kubernetes.io/projected/f5438aac-f336-484d-bd46-9c316711d2f3-kube-api-access-rfh85\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.387122 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-catalog-content\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.387882 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-utilities\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.387892 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-catalog-content\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.412662 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfh85\" (UniqueName: \"kubernetes.io/projected/f5438aac-f336-484d-bd46-9c316711d2f3-kube-api-access-rfh85\") pod \"community-operators-btbvr\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.444072 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcrr6"] Nov 26 14:28:24 crc kubenswrapper[4695]: I1126 14:28:24.539981 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:25 crc kubenswrapper[4695]: I1126 14:28:25.046260 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-btbvr"] Nov 26 14:28:25 crc kubenswrapper[4695]: W1126 14:28:25.051972 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5438aac_f336_484d_bd46_9c316711d2f3.slice/crio-8950f2da728d9dd1c92e322711274097deb0f3e98cef924c6cc97da2679e2743 WatchSource:0}: Error finding container 8950f2da728d9dd1c92e322711274097deb0f3e98cef924c6cc97da2679e2743: Status 404 returned error can't find the container with id 8950f2da728d9dd1c92e322711274097deb0f3e98cef924c6cc97da2679e2743 Nov 26 14:28:25 crc kubenswrapper[4695]: I1126 14:28:25.318979 4695 generic.go:334] "Generic (PLEG): container finished" podID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerID="6c396bedafa6774a34b9fd58bd9e81bc6ea41956ef289afee9b44aa4e7e50528" exitCode=0 Nov 26 14:28:25 crc kubenswrapper[4695]: 
I1126 14:28:25.319020 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcrr6" event={"ID":"3e42415f-5a2a-4b90-b716-1c3fc4355c27","Type":"ContainerDied","Data":"6c396bedafa6774a34b9fd58bd9e81bc6ea41956ef289afee9b44aa4e7e50528"} Nov 26 14:28:25 crc kubenswrapper[4695]: I1126 14:28:25.319059 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcrr6" event={"ID":"3e42415f-5a2a-4b90-b716-1c3fc4355c27","Type":"ContainerStarted","Data":"21e3648a9b6da82e89571dc1fe094607ef3e38309123e400607a9e487453cdb3"} Nov 26 14:28:25 crc kubenswrapper[4695]: I1126 14:28:25.320650 4695 generic.go:334] "Generic (PLEG): container finished" podID="f5438aac-f336-484d-bd46-9c316711d2f3" containerID="92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db" exitCode=0 Nov 26 14:28:25 crc kubenswrapper[4695]: I1126 14:28:25.320687 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbvr" event={"ID":"f5438aac-f336-484d-bd46-9c316711d2f3","Type":"ContainerDied","Data":"92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db"} Nov 26 14:28:25 crc kubenswrapper[4695]: I1126 14:28:25.320711 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbvr" event={"ID":"f5438aac-f336-484d-bd46-9c316711d2f3","Type":"ContainerStarted","Data":"8950f2da728d9dd1c92e322711274097deb0f3e98cef924c6cc97da2679e2743"} Nov 26 14:28:27 crc kubenswrapper[4695]: I1126 14:28:27.342470 4695 generic.go:334] "Generic (PLEG): container finished" podID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerID="5d23fefea6bd8341dcc6fa7c0edc88c384ecda5b5e2130742f977191577903f3" exitCode=0 Nov 26 14:28:27 crc kubenswrapper[4695]: I1126 14:28:27.342571 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcrr6" 
event={"ID":"3e42415f-5a2a-4b90-b716-1c3fc4355c27","Type":"ContainerDied","Data":"5d23fefea6bd8341dcc6fa7c0edc88c384ecda5b5e2130742f977191577903f3"} Nov 26 14:28:27 crc kubenswrapper[4695]: I1126 14:28:27.350647 4695 generic.go:334] "Generic (PLEG): container finished" podID="f5438aac-f336-484d-bd46-9c316711d2f3" containerID="e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42" exitCode=0 Nov 26 14:28:27 crc kubenswrapper[4695]: I1126 14:28:27.350708 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbvr" event={"ID":"f5438aac-f336-484d-bd46-9c316711d2f3","Type":"ContainerDied","Data":"e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42"} Nov 26 14:28:28 crc kubenswrapper[4695]: I1126 14:28:28.370888 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbvr" event={"ID":"f5438aac-f336-484d-bd46-9c316711d2f3","Type":"ContainerStarted","Data":"8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b"} Nov 26 14:28:28 crc kubenswrapper[4695]: I1126 14:28:28.374547 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcrr6" event={"ID":"3e42415f-5a2a-4b90-b716-1c3fc4355c27","Type":"ContainerStarted","Data":"fb809d0bd34f4bd26370dab0d2399ff4f02b8ecf63aeef89fd2913f0496c16db"} Nov 26 14:28:28 crc kubenswrapper[4695]: I1126 14:28:28.400423 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-btbvr" podStartSLOduration=1.966243317 podStartE2EDuration="4.400403231s" podCreationTimestamp="2025-11-26 14:28:24 +0000 UTC" firstStartedPulling="2025-11-26 14:28:25.321637474 +0000 UTC m=+3888.957462556" lastFinishedPulling="2025-11-26 14:28:27.755797388 +0000 UTC m=+3891.391622470" observedRunningTime="2025-11-26 14:28:28.398340866 +0000 UTC m=+3892.034165958" watchObservedRunningTime="2025-11-26 14:28:28.400403231 +0000 UTC 
m=+3892.036228323" Nov 26 14:28:28 crc kubenswrapper[4695]: I1126 14:28:28.425292 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qcrr6" podStartSLOduration=2.882505429 podStartE2EDuration="5.425270984s" podCreationTimestamp="2025-11-26 14:28:23 +0000 UTC" firstStartedPulling="2025-11-26 14:28:25.32087319 +0000 UTC m=+3888.956698272" lastFinishedPulling="2025-11-26 14:28:27.863638755 +0000 UTC m=+3891.499463827" observedRunningTime="2025-11-26 14:28:28.420610986 +0000 UTC m=+3892.056436068" watchObservedRunningTime="2025-11-26 14:28:28.425270984 +0000 UTC m=+3892.061096066" Nov 26 14:28:31 crc kubenswrapper[4695]: I1126 14:28:31.162245 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:28:31 crc kubenswrapper[4695]: E1126 14:28:31.162797 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:28:33 crc kubenswrapper[4695]: I1126 14:28:33.962237 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:33 crc kubenswrapper[4695]: I1126 14:28:33.962547 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:34 crc kubenswrapper[4695]: I1126 14:28:34.033338 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:34 crc kubenswrapper[4695]: I1126 14:28:34.476913 4695 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:34 crc kubenswrapper[4695]: I1126 14:28:34.524854 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcrr6"] Nov 26 14:28:34 crc kubenswrapper[4695]: I1126 14:28:34.541143 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:34 crc kubenswrapper[4695]: I1126 14:28:34.541200 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:34 crc kubenswrapper[4695]: I1126 14:28:34.589383 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:35 crc kubenswrapper[4695]: I1126 14:28:35.490504 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:36 crc kubenswrapper[4695]: I1126 14:28:36.446163 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qcrr6" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="registry-server" containerID="cri-o://fb809d0bd34f4bd26370dab0d2399ff4f02b8ecf63aeef89fd2913f0496c16db" gracePeriod=2 Nov 26 14:28:36 crc kubenswrapper[4695]: E1126 14:28:36.525511 4695 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e42415f_5a2a_4b90_b716_1c3fc4355c27.slice/crio-fb809d0bd34f4bd26370dab0d2399ff4f02b8ecf63aeef89fd2913f0496c16db.scope\": RecentStats: unable to find data in memory cache]" Nov 26 14:28:36 crc kubenswrapper[4695]: I1126 14:28:36.665583 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btbvr"] Nov 26 14:28:37 crc 
kubenswrapper[4695]: I1126 14:28:37.484777 4695 generic.go:334] "Generic (PLEG): container finished" podID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerID="fb809d0bd34f4bd26370dab0d2399ff4f02b8ecf63aeef89fd2913f0496c16db" exitCode=0 Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.485132 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-btbvr" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="registry-server" containerID="cri-o://8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b" gracePeriod=2 Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.485436 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcrr6" event={"ID":"3e42415f-5a2a-4b90-b716-1c3fc4355c27","Type":"ContainerDied","Data":"fb809d0bd34f4bd26370dab0d2399ff4f02b8ecf63aeef89fd2913f0496c16db"} Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.689202 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.843284 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-catalog-content\") pod \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.843509 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6pwgl\" (UniqueName: \"kubernetes.io/projected/3e42415f-5a2a-4b90-b716-1c3fc4355c27-kube-api-access-6pwgl\") pod \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.843588 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-utilities\") pod \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\" (UID: \"3e42415f-5a2a-4b90-b716-1c3fc4355c27\") " Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.844677 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-utilities" (OuterVolumeSpecName: "utilities") pod "3e42415f-5a2a-4b90-b716-1c3fc4355c27" (UID: "3e42415f-5a2a-4b90-b716-1c3fc4355c27"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.853033 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e42415f-5a2a-4b90-b716-1c3fc4355c27" (UID: "3e42415f-5a2a-4b90-b716-1c3fc4355c27"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.854015 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e42415f-5a2a-4b90-b716-1c3fc4355c27-kube-api-access-6pwgl" (OuterVolumeSpecName: "kube-api-access-6pwgl") pod "3e42415f-5a2a-4b90-b716-1c3fc4355c27" (UID: "3e42415f-5a2a-4b90-b716-1c3fc4355c27"). InnerVolumeSpecName "kube-api-access-6pwgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.945299 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6pwgl\" (UniqueName: \"kubernetes.io/projected/3e42415f-5a2a-4b90-b716-1c3fc4355c27-kube-api-access-6pwgl\") on node \"crc\" DevicePath \"\"" Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.945586 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.945595 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e42415f-5a2a-4b90-b716-1c3fc4355c27-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:28:37 crc kubenswrapper[4695]: I1126 14:28:37.950518 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.046864 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-catalog-content\") pod \"f5438aac-f336-484d-bd46-9c316711d2f3\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.047095 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfh85\" (UniqueName: \"kubernetes.io/projected/f5438aac-f336-484d-bd46-9c316711d2f3-kube-api-access-rfh85\") pod \"f5438aac-f336-484d-bd46-9c316711d2f3\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.047143 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-utilities\") pod \"f5438aac-f336-484d-bd46-9c316711d2f3\" (UID: \"f5438aac-f336-484d-bd46-9c316711d2f3\") " Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.048474 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-utilities" (OuterVolumeSpecName: "utilities") pod "f5438aac-f336-484d-bd46-9c316711d2f3" (UID: "f5438aac-f336-484d-bd46-9c316711d2f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.054141 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5438aac-f336-484d-bd46-9c316711d2f3-kube-api-access-rfh85" (OuterVolumeSpecName: "kube-api-access-rfh85") pod "f5438aac-f336-484d-bd46-9c316711d2f3" (UID: "f5438aac-f336-484d-bd46-9c316711d2f3"). InnerVolumeSpecName "kube-api-access-rfh85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.095084 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5438aac-f336-484d-bd46-9c316711d2f3" (UID: "f5438aac-f336-484d-bd46-9c316711d2f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.156609 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.156637 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5438aac-f336-484d-bd46-9c316711d2f3-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.156648 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfh85\" (UniqueName: \"kubernetes.io/projected/f5438aac-f336-484d-bd46-9c316711d2f3-kube-api-access-rfh85\") on node \"crc\" DevicePath \"\"" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.494232 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qcrr6" event={"ID":"3e42415f-5a2a-4b90-b716-1c3fc4355c27","Type":"ContainerDied","Data":"21e3648a9b6da82e89571dc1fe094607ef3e38309123e400607a9e487453cdb3"} Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.494275 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qcrr6" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.494286 4695 scope.go:117] "RemoveContainer" containerID="fb809d0bd34f4bd26370dab0d2399ff4f02b8ecf63aeef89fd2913f0496c16db" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.497333 4695 generic.go:334] "Generic (PLEG): container finished" podID="f5438aac-f336-484d-bd46-9c316711d2f3" containerID="8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b" exitCode=0 Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.497393 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbvr" event={"ID":"f5438aac-f336-484d-bd46-9c316711d2f3","Type":"ContainerDied","Data":"8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b"} Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.497422 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-btbvr" event={"ID":"f5438aac-f336-484d-bd46-9c316711d2f3","Type":"ContainerDied","Data":"8950f2da728d9dd1c92e322711274097deb0f3e98cef924c6cc97da2679e2743"} Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.497485 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-btbvr" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.516063 4695 scope.go:117] "RemoveContainer" containerID="5d23fefea6bd8341dcc6fa7c0edc88c384ecda5b5e2130742f977191577903f3" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.562631 4695 scope.go:117] "RemoveContainer" containerID="6c396bedafa6774a34b9fd58bd9e81bc6ea41956ef289afee9b44aa4e7e50528" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.581538 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcrr6"] Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.588885 4695 scope.go:117] "RemoveContainer" containerID="8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.588906 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qcrr6"] Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.604441 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-btbvr"] Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.605184 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-btbvr"] Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.654490 4695 scope.go:117] "RemoveContainer" containerID="e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.683063 4695 scope.go:117] "RemoveContainer" containerID="92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.720551 4695 scope.go:117] "RemoveContainer" containerID="8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b" Nov 26 14:28:38 crc kubenswrapper[4695]: E1126 14:28:38.721091 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b\": container with ID starting with 8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b not found: ID does not exist" containerID="8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.721127 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b"} err="failed to get container status \"8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b\": rpc error: code = NotFound desc = could not find container \"8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b\": container with ID starting with 8f6f0699bea2fa8d436303949994ae7463b8ada112d1654947f192d52088eb8b not found: ID does not exist" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.721153 4695 scope.go:117] "RemoveContainer" containerID="e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42" Nov 26 14:28:38 crc kubenswrapper[4695]: E1126 14:28:38.721583 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42\": container with ID starting with e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42 not found: ID does not exist" containerID="e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.721615 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42"} err="failed to get container status \"e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42\": rpc error: code = NotFound desc = could not find container 
\"e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42\": container with ID starting with e0861c93c7e0efa7da6de6982a8375eb5ac67655429b583af510a98092ad0f42 not found: ID does not exist" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.721635 4695 scope.go:117] "RemoveContainer" containerID="92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db" Nov 26 14:28:38 crc kubenswrapper[4695]: E1126 14:28:38.721857 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db\": container with ID starting with 92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db not found: ID does not exist" containerID="92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db" Nov 26 14:28:38 crc kubenswrapper[4695]: I1126 14:28:38.721875 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db"} err="failed to get container status \"92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db\": rpc error: code = NotFound desc = could not find container \"92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db\": container with ID starting with 92390b629364db5755eccdcbf7cf8b1ad5e8c19699646622ad396392172073db not found: ID does not exist" Nov 26 14:28:39 crc kubenswrapper[4695]: I1126 14:28:39.174532 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" path="/var/lib/kubelet/pods/3e42415f-5a2a-4b90-b716-1c3fc4355c27/volumes" Nov 26 14:28:39 crc kubenswrapper[4695]: I1126 14:28:39.176559 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" path="/var/lib/kubelet/pods/f5438aac-f336-484d-bd46-9c316711d2f3/volumes" Nov 26 14:28:44 crc kubenswrapper[4695]: I1126 14:28:44.161988 
4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:28:44 crc kubenswrapper[4695]: E1126 14:28:44.162748 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:28:55 crc kubenswrapper[4695]: I1126 14:28:55.162309 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:28:55 crc kubenswrapper[4695]: E1126 14:28:55.163628 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:29:06 crc kubenswrapper[4695]: I1126 14:29:06.162623 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:29:06 crc kubenswrapper[4695]: E1126 14:29:06.163833 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:29:20 crc kubenswrapper[4695]: I1126 
14:29:20.162088 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:29:20 crc kubenswrapper[4695]: E1126 14:29:20.162790 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:29:31 crc kubenswrapper[4695]: I1126 14:29:31.162565 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:29:31 crc kubenswrapper[4695]: E1126 14:29:31.163376 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:29:42 crc kubenswrapper[4695]: I1126 14:29:42.163396 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:29:42 crc kubenswrapper[4695]: E1126 14:29:42.164235 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:29:54 crc 
kubenswrapper[4695]: I1126 14:29:54.161666 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:29:54 crc kubenswrapper[4695]: E1126 14:29:54.162466 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.175758 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg"] Nov 26 14:30:00 crc kubenswrapper[4695]: E1126 14:30:00.177028 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="extract-content" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177057 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="extract-content" Nov 26 14:30:00 crc kubenswrapper[4695]: E1126 14:30:00.177080 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="registry-server" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177094 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="registry-server" Nov 26 14:30:00 crc kubenswrapper[4695]: E1126 14:30:00.177128 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="extract-utilities" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177141 4695 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="extract-utilities" Nov 26 14:30:00 crc kubenswrapper[4695]: E1126 14:30:00.177164 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="extract-content" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177177 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="extract-content" Nov 26 14:30:00 crc kubenswrapper[4695]: E1126 14:30:00.177203 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="extract-utilities" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177217 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="extract-utilities" Nov 26 14:30:00 crc kubenswrapper[4695]: E1126 14:30:00.177241 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="registry-server" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177253 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="registry-server" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177592 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5438aac-f336-484d-bd46-9c316711d2f3" containerName="registry-server" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.177648 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e42415f-5a2a-4b90-b716-1c3fc4355c27" containerName="registry-server" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.178801 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.180962 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.181413 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.185966 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg"] Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.288923 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340b0728-3d57-4804-b353-64bcf2a81be0-config-volume\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.289232 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340b0728-3d57-4804-b353-64bcf2a81be0-secret-volume\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.289277 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2596d\" (UniqueName: \"kubernetes.io/projected/340b0728-3d57-4804-b353-64bcf2a81be0-kube-api-access-2596d\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.340725 4695 generic.go:334] "Generic (PLEG): container finished" podID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerID="2bb051272c99110814ce9aa6689d87d3d60d7f25fadb71f2d94d39b91bcc087e" exitCode=0 Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.340785 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8mxld/must-gather-qxg6x" event={"ID":"20e41060-a51b-460d-baef-7b2e118d2a4f","Type":"ContainerDied","Data":"2bb051272c99110814ce9aa6689d87d3d60d7f25fadb71f2d94d39b91bcc087e"} Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.341502 4695 scope.go:117] "RemoveContainer" containerID="2bb051272c99110814ce9aa6689d87d3d60d7f25fadb71f2d94d39b91bcc087e" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.391784 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340b0728-3d57-4804-b353-64bcf2a81be0-config-volume\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.391983 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340b0728-3d57-4804-b353-64bcf2a81be0-secret-volume\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.392093 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2596d\" (UniqueName: \"kubernetes.io/projected/340b0728-3d57-4804-b353-64bcf2a81be0-kube-api-access-2596d\") pod \"collect-profiles-29402790-9z9zg\" (UID: 
\"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.393407 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340b0728-3d57-4804-b353-64bcf2a81be0-config-volume\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.522963 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8mxld_must-gather-qxg6x_20e41060-a51b-460d-baef-7b2e118d2a4f/gather/0.log" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.946447 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2596d\" (UniqueName: \"kubernetes.io/projected/340b0728-3d57-4804-b353-64bcf2a81be0-kube-api-access-2596d\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:00 crc kubenswrapper[4695]: I1126 14:30:00.957786 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340b0728-3d57-4804-b353-64bcf2a81be0-secret-volume\") pod \"collect-profiles-29402790-9z9zg\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:01 crc kubenswrapper[4695]: I1126 14:30:01.094186 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:01 crc kubenswrapper[4695]: I1126 14:30:01.619129 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg"] Nov 26 14:30:02 crc kubenswrapper[4695]: I1126 14:30:02.359099 4695 generic.go:334] "Generic (PLEG): container finished" podID="340b0728-3d57-4804-b353-64bcf2a81be0" containerID="4346d5b0ae5fc3f1ee714faeedabd331f371a12b484526816c1dea7cf3db9abd" exitCode=0 Nov 26 14:30:02 crc kubenswrapper[4695]: I1126 14:30:02.359409 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" event={"ID":"340b0728-3d57-4804-b353-64bcf2a81be0","Type":"ContainerDied","Data":"4346d5b0ae5fc3f1ee714faeedabd331f371a12b484526816c1dea7cf3db9abd"} Nov 26 14:30:02 crc kubenswrapper[4695]: I1126 14:30:02.359476 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" event={"ID":"340b0728-3d57-4804-b353-64bcf2a81be0","Type":"ContainerStarted","Data":"bc046c7a89cb9056ca5c84619077476da1ff3e96c12e248305c7ea0fc65a69c4"} Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.840729 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.967852 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340b0728-3d57-4804-b353-64bcf2a81be0-secret-volume\") pod \"340b0728-3d57-4804-b353-64bcf2a81be0\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.968722 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2596d\" (UniqueName: \"kubernetes.io/projected/340b0728-3d57-4804-b353-64bcf2a81be0-kube-api-access-2596d\") pod \"340b0728-3d57-4804-b353-64bcf2a81be0\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.968876 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340b0728-3d57-4804-b353-64bcf2a81be0-config-volume\") pod \"340b0728-3d57-4804-b353-64bcf2a81be0\" (UID: \"340b0728-3d57-4804-b353-64bcf2a81be0\") " Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.969576 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/340b0728-3d57-4804-b353-64bcf2a81be0-config-volume" (OuterVolumeSpecName: "config-volume") pod "340b0728-3d57-4804-b353-64bcf2a81be0" (UID: "340b0728-3d57-4804-b353-64bcf2a81be0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.969835 4695 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/340b0728-3d57-4804-b353-64bcf2a81be0-config-volume\") on node \"crc\" DevicePath \"\"" Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.974252 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/340b0728-3d57-4804-b353-64bcf2a81be0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "340b0728-3d57-4804-b353-64bcf2a81be0" (UID: "340b0728-3d57-4804-b353-64bcf2a81be0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 26 14:30:03 crc kubenswrapper[4695]: I1126 14:30:03.974532 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340b0728-3d57-4804-b353-64bcf2a81be0-kube-api-access-2596d" (OuterVolumeSpecName: "kube-api-access-2596d") pod "340b0728-3d57-4804-b353-64bcf2a81be0" (UID: "340b0728-3d57-4804-b353-64bcf2a81be0"). InnerVolumeSpecName "kube-api-access-2596d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:30:04 crc kubenswrapper[4695]: I1126 14:30:04.071079 4695 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/340b0728-3d57-4804-b353-64bcf2a81be0-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 26 14:30:04 crc kubenswrapper[4695]: I1126 14:30:04.071110 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2596d\" (UniqueName: \"kubernetes.io/projected/340b0728-3d57-4804-b353-64bcf2a81be0-kube-api-access-2596d\") on node \"crc\" DevicePath \"\"" Nov 26 14:30:04 crc kubenswrapper[4695]: I1126 14:30:04.379901 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" event={"ID":"340b0728-3d57-4804-b353-64bcf2a81be0","Type":"ContainerDied","Data":"bc046c7a89cb9056ca5c84619077476da1ff3e96c12e248305c7ea0fc65a69c4"} Nov 26 14:30:04 crc kubenswrapper[4695]: I1126 14:30:04.379946 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29402790-9z9zg" Nov 26 14:30:04 crc kubenswrapper[4695]: I1126 14:30:04.379973 4695 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc046c7a89cb9056ca5c84619077476da1ff3e96c12e248305c7ea0fc65a69c4" Nov 26 14:30:04 crc kubenswrapper[4695]: I1126 14:30:04.914885 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t"] Nov 26 14:30:04 crc kubenswrapper[4695]: I1126 14:30:04.921723 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29402745-8ms7t"] Nov 26 14:30:05 crc kubenswrapper[4695]: I1126 14:30:05.177587 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54dd4fa9-5754-4da9-852d-c6b0ccbcc258" path="/var/lib/kubelet/pods/54dd4fa9-5754-4da9-852d-c6b0ccbcc258/volumes" Nov 26 14:30:06 crc kubenswrapper[4695]: I1126 14:30:06.163295 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:30:06 crc kubenswrapper[4695]: E1126 14:30:06.163638 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.011009 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8mxld/must-gather-qxg6x"] Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.011996 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8mxld/must-gather-qxg6x" 
podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerName="copy" containerID="cri-o://11ab25debbadeb0a0a128d0a58e6c05ab092337684f2af7106c5dda56a8c403d" gracePeriod=2 Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.020173 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8mxld/must-gather-qxg6x"] Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.431470 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8mxld_must-gather-qxg6x_20e41060-a51b-460d-baef-7b2e118d2a4f/copy/0.log" Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.432165 4695 generic.go:334] "Generic (PLEG): container finished" podID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerID="11ab25debbadeb0a0a128d0a58e6c05ab092337684f2af7106c5dda56a8c403d" exitCode=143 Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.583216 4695 scope.go:117] "RemoveContainer" containerID="be7b48c508a868d930fa00045d3f0c6b1b1780d35d972d52901120bd5a4982e6" Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.880898 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8mxld_must-gather-qxg6x_20e41060-a51b-460d-baef-7b2e118d2a4f/copy/0.log" Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.881798 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.983189 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6hfb\" (UniqueName: \"kubernetes.io/projected/20e41060-a51b-460d-baef-7b2e118d2a4f-kube-api-access-m6hfb\") pod \"20e41060-a51b-460d-baef-7b2e118d2a4f\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.983408 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20e41060-a51b-460d-baef-7b2e118d2a4f-must-gather-output\") pod \"20e41060-a51b-460d-baef-7b2e118d2a4f\" (UID: \"20e41060-a51b-460d-baef-7b2e118d2a4f\") " Nov 26 14:30:09 crc kubenswrapper[4695]: I1126 14:30:09.993787 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20e41060-a51b-460d-baef-7b2e118d2a4f-kube-api-access-m6hfb" (OuterVolumeSpecName: "kube-api-access-m6hfb") pod "20e41060-a51b-460d-baef-7b2e118d2a4f" (UID: "20e41060-a51b-460d-baef-7b2e118d2a4f"). InnerVolumeSpecName "kube-api-access-m6hfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:30:10 crc kubenswrapper[4695]: I1126 14:30:10.086478 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6hfb\" (UniqueName: \"kubernetes.io/projected/20e41060-a51b-460d-baef-7b2e118d2a4f-kube-api-access-m6hfb\") on node \"crc\" DevicePath \"\"" Nov 26 14:30:10 crc kubenswrapper[4695]: I1126 14:30:10.132837 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20e41060-a51b-460d-baef-7b2e118d2a4f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "20e41060-a51b-460d-baef-7b2e118d2a4f" (UID: "20e41060-a51b-460d-baef-7b2e118d2a4f"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:30:10 crc kubenswrapper[4695]: I1126 14:30:10.188572 4695 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/20e41060-a51b-460d-baef-7b2e118d2a4f-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 26 14:30:10 crc kubenswrapper[4695]: I1126 14:30:10.441146 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8mxld_must-gather-qxg6x_20e41060-a51b-460d-baef-7b2e118d2a4f/copy/0.log" Nov 26 14:30:10 crc kubenswrapper[4695]: I1126 14:30:10.441523 4695 scope.go:117] "RemoveContainer" containerID="11ab25debbadeb0a0a128d0a58e6c05ab092337684f2af7106c5dda56a8c403d" Nov 26 14:30:10 crc kubenswrapper[4695]: I1126 14:30:10.441682 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8mxld/must-gather-qxg6x" Nov 26 14:30:10 crc kubenswrapper[4695]: I1126 14:30:10.486378 4695 scope.go:117] "RemoveContainer" containerID="2bb051272c99110814ce9aa6689d87d3d60d7f25fadb71f2d94d39b91bcc087e" Nov 26 14:30:11 crc kubenswrapper[4695]: I1126 14:30:11.173733 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" path="/var/lib/kubelet/pods/20e41060-a51b-460d-baef-7b2e118d2a4f/volumes" Nov 26 14:30:18 crc kubenswrapper[4695]: I1126 14:30:18.162723 4695 scope.go:117] "RemoveContainer" containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:30:18 crc kubenswrapper[4695]: I1126 14:30:18.547231 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"ef6fb8c0ab2c4ea2801594a2c8b9b77a9947483ff7104624a94de9342fec41f3"} Nov 26 14:32:09 crc kubenswrapper[4695]: I1126 14:32:09.809602 4695 scope.go:117] "RemoveContainer" 
containerID="773de3a4c1e70f0f92c530abcaa46327bcfdb766a680a7273413bf89ba4183a0" Nov 26 14:32:09 crc kubenswrapper[4695]: I1126 14:32:09.844632 4695 scope.go:117] "RemoveContainer" containerID="1b59e90d236c9865765dba96f65f93008592269875570bec77ee0584304059ee" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.452200 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-982gv"] Nov 26 14:32:35 crc kubenswrapper[4695]: E1126 14:32:35.453391 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerName="copy" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.453410 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerName="copy" Nov 26 14:32:35 crc kubenswrapper[4695]: E1126 14:32:35.453441 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340b0728-3d57-4804-b353-64bcf2a81be0" containerName="collect-profiles" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.453449 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="340b0728-3d57-4804-b353-64bcf2a81be0" containerName="collect-profiles" Nov 26 14:32:35 crc kubenswrapper[4695]: E1126 14:32:35.453487 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerName="gather" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.453497 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerName="gather" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.453759 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="340b0728-3d57-4804-b353-64bcf2a81be0" containerName="collect-profiles" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.453787 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerName="gather" Nov 26 14:32:35 crc 
kubenswrapper[4695]: I1126 14:32:35.453803 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="20e41060-a51b-460d-baef-7b2e118d2a4f" containerName="copy" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.456149 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.471882 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-982gv"] Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.574377 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-utilities\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.574682 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngpjw\" (UniqueName: \"kubernetes.io/projected/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-kube-api-access-ngpjw\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.574803 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-catalog-content\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.676575 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-utilities\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.676674 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngpjw\" (UniqueName: \"kubernetes.io/projected/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-kube-api-access-ngpjw\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.676729 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-catalog-content\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.677173 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-utilities\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.677259 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-catalog-content\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.697293 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngpjw\" (UniqueName: 
\"kubernetes.io/projected/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-kube-api-access-ngpjw\") pod \"certified-operators-982gv\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:35 crc kubenswrapper[4695]: I1126 14:32:35.787462 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:36 crc kubenswrapper[4695]: I1126 14:32:36.244974 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-982gv"] Nov 26 14:32:36 crc kubenswrapper[4695]: I1126 14:32:36.396524 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:32:36 crc kubenswrapper[4695]: I1126 14:32:36.396806 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:32:37 crc kubenswrapper[4695]: I1126 14:32:37.020871 4695 generic.go:334] "Generic (PLEG): container finished" podID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerID="822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7" exitCode=0 Nov 26 14:32:37 crc kubenswrapper[4695]: I1126 14:32:37.020911 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982gv" event={"ID":"3bd9fefd-3f8a-4ec7-9407-39996d6ac590","Type":"ContainerDied","Data":"822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7"} Nov 26 14:32:37 crc kubenswrapper[4695]: I1126 14:32:37.020937 4695 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982gv" event={"ID":"3bd9fefd-3f8a-4ec7-9407-39996d6ac590","Type":"ContainerStarted","Data":"1b6fc876f9faed9ab6d29493faca25f59984039e9fd07c2c2495ddc281fa199c"} Nov 26 14:32:37 crc kubenswrapper[4695]: I1126 14:32:37.024663 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 14:32:39 crc kubenswrapper[4695]: I1126 14:32:39.046669 4695 generic.go:334] "Generic (PLEG): container finished" podID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerID="b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5" exitCode=0 Nov 26 14:32:39 crc kubenswrapper[4695]: I1126 14:32:39.046828 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982gv" event={"ID":"3bd9fefd-3f8a-4ec7-9407-39996d6ac590","Type":"ContainerDied","Data":"b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5"} Nov 26 14:32:40 crc kubenswrapper[4695]: I1126 14:32:40.059202 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982gv" event={"ID":"3bd9fefd-3f8a-4ec7-9407-39996d6ac590","Type":"ContainerStarted","Data":"c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc"} Nov 26 14:32:40 crc kubenswrapper[4695]: I1126 14:32:40.079409 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-982gv" podStartSLOduration=2.355109923 podStartE2EDuration="5.079391749s" podCreationTimestamp="2025-11-26 14:32:35 +0000 UTC" firstStartedPulling="2025-11-26 14:32:37.024442622 +0000 UTC m=+4140.660267704" lastFinishedPulling="2025-11-26 14:32:39.748724448 +0000 UTC m=+4143.384549530" observedRunningTime="2025-11-26 14:32:40.077032184 +0000 UTC m=+4143.712857266" watchObservedRunningTime="2025-11-26 14:32:40.079391749 +0000 UTC m=+4143.715216831" Nov 26 14:32:45 crc kubenswrapper[4695]: 
I1126 14:32:45.788750 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:45 crc kubenswrapper[4695]: I1126 14:32:45.789271 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:45 crc kubenswrapper[4695]: I1126 14:32:45.837426 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:46 crc kubenswrapper[4695]: I1126 14:32:46.177827 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:46 crc kubenswrapper[4695]: I1126 14:32:46.221454 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-982gv"] Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.021909 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4c7q6/must-gather-pxx9n"] Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.024208 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.025929 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4c7q6"/"kube-root-ca.crt" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.030275 4695 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4c7q6"/"default-dockercfg-jvmcr" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.030274 4695 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4c7q6"/"openshift-service-ca.crt" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.045241 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4c7q6/must-gather-pxx9n"] Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.199524 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfhsj\" (UniqueName: \"kubernetes.io/projected/73e4ac97-5cec-42a6-9468-74f019b091a6-kube-api-access-hfhsj\") pod \"must-gather-pxx9n\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.199654 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73e4ac97-5cec-42a6-9468-74f019b091a6-must-gather-output\") pod \"must-gather-pxx9n\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.301519 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73e4ac97-5cec-42a6-9468-74f019b091a6-must-gather-output\") pod \"must-gather-pxx9n\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " 
pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.301627 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfhsj\" (UniqueName: \"kubernetes.io/projected/73e4ac97-5cec-42a6-9468-74f019b091a6-kube-api-access-hfhsj\") pod \"must-gather-pxx9n\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.302177 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73e4ac97-5cec-42a6-9468-74f019b091a6-must-gather-output\") pod \"must-gather-pxx9n\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.322965 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfhsj\" (UniqueName: \"kubernetes.io/projected/73e4ac97-5cec-42a6-9468-74f019b091a6-kube-api-access-hfhsj\") pod \"must-gather-pxx9n\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.340887 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:32:47 crc kubenswrapper[4695]: W1126 14:32:47.798297 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod73e4ac97_5cec_42a6_9468_74f019b091a6.slice/crio-5717a74bb6ebd3a663789a6e36496c7b7a86bfe6ec731fdf25382e353e1b28a4 WatchSource:0}: Error finding container 5717a74bb6ebd3a663789a6e36496c7b7a86bfe6ec731fdf25382e353e1b28a4: Status 404 returned error can't find the container with id 5717a74bb6ebd3a663789a6e36496c7b7a86bfe6ec731fdf25382e353e1b28a4 Nov 26 14:32:47 crc kubenswrapper[4695]: I1126 14:32:47.800886 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4c7q6/must-gather-pxx9n"] Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.136010 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-982gv" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="registry-server" containerID="cri-o://c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc" gracePeriod=2 Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.136128 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" event={"ID":"73e4ac97-5cec-42a6-9468-74f019b091a6","Type":"ContainerStarted","Data":"e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada"} Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.136158 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" event={"ID":"73e4ac97-5cec-42a6-9468-74f019b091a6","Type":"ContainerStarted","Data":"5717a74bb6ebd3a663789a6e36496c7b7a86bfe6ec731fdf25382e353e1b28a4"} Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.643568 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.734384 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-catalog-content\") pod \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.734539 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngpjw\" (UniqueName: \"kubernetes.io/projected/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-kube-api-access-ngpjw\") pod \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.734566 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-utilities\") pod \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\" (UID: \"3bd9fefd-3f8a-4ec7-9407-39996d6ac590\") " Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.736336 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-utilities" (OuterVolumeSpecName: "utilities") pod "3bd9fefd-3f8a-4ec7-9407-39996d6ac590" (UID: "3bd9fefd-3f8a-4ec7-9407-39996d6ac590"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.754180 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-kube-api-access-ngpjw" (OuterVolumeSpecName: "kube-api-access-ngpjw") pod "3bd9fefd-3f8a-4ec7-9407-39996d6ac590" (UID: "3bd9fefd-3f8a-4ec7-9407-39996d6ac590"). InnerVolumeSpecName "kube-api-access-ngpjw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.837503 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngpjw\" (UniqueName: \"kubernetes.io/projected/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-kube-api-access-ngpjw\") on node \"crc\" DevicePath \"\"" Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.837537 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.919642 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bd9fefd-3f8a-4ec7-9407-39996d6ac590" (UID: "3bd9fefd-3f8a-4ec7-9407-39996d6ac590"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:32:48 crc kubenswrapper[4695]: I1126 14:32:48.938871 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd9fefd-3f8a-4ec7-9407-39996d6ac590-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.144420 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" event={"ID":"73e4ac97-5cec-42a6-9468-74f019b091a6","Type":"ContainerStarted","Data":"823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022"} Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.146993 4695 generic.go:334] "Generic (PLEG): container finished" podID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerID="c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc" exitCode=0 Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.147033 4695 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-982gv" event={"ID":"3bd9fefd-3f8a-4ec7-9407-39996d6ac590","Type":"ContainerDied","Data":"c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc"} Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.147061 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-982gv" event={"ID":"3bd9fefd-3f8a-4ec7-9407-39996d6ac590","Type":"ContainerDied","Data":"1b6fc876f9faed9ab6d29493faca25f59984039e9fd07c2c2495ddc281fa199c"} Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.147077 4695 scope.go:117] "RemoveContainer" containerID="c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.147133 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-982gv" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.170550 4695 scope.go:117] "RemoveContainer" containerID="b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.180764 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" podStartSLOduration=3.180743268 podStartE2EDuration="3.180743268s" podCreationTimestamp="2025-11-26 14:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-26 14:32:49.178469306 +0000 UTC m=+4152.814294388" watchObservedRunningTime="2025-11-26 14:32:49.180743268 +0000 UTC m=+4152.816568350" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.210834 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-982gv"] Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.221884 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-982gv"] Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.357498 4695 scope.go:117] "RemoveContainer" containerID="822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.410283 4695 scope.go:117] "RemoveContainer" containerID="c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc" Nov 26 14:32:49 crc kubenswrapper[4695]: E1126 14:32:49.410880 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc\": container with ID starting with c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc not found: ID does not exist" containerID="c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.410918 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc"} err="failed to get container status \"c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc\": rpc error: code = NotFound desc = could not find container \"c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc\": container with ID starting with c960eea6a6f61d033708d5915064eb102dca2633099bb117a18ea4c217daccfc not found: ID does not exist" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.410944 4695 scope.go:117] "RemoveContainer" containerID="b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5" Nov 26 14:32:49 crc kubenswrapper[4695]: E1126 14:32:49.411446 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5\": container with ID starting with 
b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5 not found: ID does not exist" containerID="b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.411466 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5"} err="failed to get container status \"b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5\": rpc error: code = NotFound desc = could not find container \"b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5\": container with ID starting with b0b5d27fc1cc62a404eecffecad4185c7430fb6e309ed7c1bff60d91d3b5c3e5 not found: ID does not exist" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.411479 4695 scope.go:117] "RemoveContainer" containerID="822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7" Nov 26 14:32:49 crc kubenswrapper[4695]: E1126 14:32:49.411742 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7\": container with ID starting with 822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7 not found: ID does not exist" containerID="822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7" Nov 26 14:32:49 crc kubenswrapper[4695]: I1126 14:32:49.411773 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7"} err="failed to get container status \"822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7\": rpc error: code = NotFound desc = could not find container \"822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7\": container with ID starting with 822d671f2f33236bb74b2a49f4495d8d6523e211745b9880b180cd5bc43afea7 not found: ID does not 
exist" Nov 26 14:32:51 crc kubenswrapper[4695]: I1126 14:32:51.171714 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" path="/var/lib/kubelet/pods/3bd9fefd-3f8a-4ec7-9407-39996d6ac590/volumes" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.125124 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-rrm4z"] Nov 26 14:32:52 crc kubenswrapper[4695]: E1126 14:32:52.125527 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="extract-content" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.125548 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="extract-content" Nov 26 14:32:52 crc kubenswrapper[4695]: E1126 14:32:52.125569 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="registry-server" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.125577 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="registry-server" Nov 26 14:32:52 crc kubenswrapper[4695]: E1126 14:32:52.125589 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="extract-utilities" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.125595 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="extract-utilities" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.125788 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd9fefd-3f8a-4ec7-9407-39996d6ac590" containerName="registry-server" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.126393 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.298021 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbvbd\" (UniqueName: \"kubernetes.io/projected/b70edaf7-641b-490c-8041-1ef3d01ab5a0-kube-api-access-qbvbd\") pod \"crc-debug-rrm4z\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.299252 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70edaf7-641b-490c-8041-1ef3d01ab5a0-host\") pod \"crc-debug-rrm4z\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.404437 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70edaf7-641b-490c-8041-1ef3d01ab5a0-host\") pod \"crc-debug-rrm4z\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.404594 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbvbd\" (UniqueName: \"kubernetes.io/projected/b70edaf7-641b-490c-8041-1ef3d01ab5a0-kube-api-access-qbvbd\") pod \"crc-debug-rrm4z\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.404722 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70edaf7-641b-490c-8041-1ef3d01ab5a0-host\") pod \"crc-debug-rrm4z\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc 
kubenswrapper[4695]: I1126 14:32:52.424563 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbvbd\" (UniqueName: \"kubernetes.io/projected/b70edaf7-641b-490c-8041-1ef3d01ab5a0-kube-api-access-qbvbd\") pod \"crc-debug-rrm4z\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc kubenswrapper[4695]: I1126 14:32:52.446112 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:32:52 crc kubenswrapper[4695]: W1126 14:32:52.491537 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb70edaf7_641b_490c_8041_1ef3d01ab5a0.slice/crio-21ef9ceda2fbdb572a011d1ed1f6b48ff8b770e8c16bf564b0dbdb35b03c4cfc WatchSource:0}: Error finding container 21ef9ceda2fbdb572a011d1ed1f6b48ff8b770e8c16bf564b0dbdb35b03c4cfc: Status 404 returned error can't find the container with id 21ef9ceda2fbdb572a011d1ed1f6b48ff8b770e8c16bf564b0dbdb35b03c4cfc Nov 26 14:32:53 crc kubenswrapper[4695]: I1126 14:32:53.188717 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" event={"ID":"b70edaf7-641b-490c-8041-1ef3d01ab5a0","Type":"ContainerStarted","Data":"b035cde49ab013cf1095a4fa2c5b8d820cef7300fde352c04098100023223115"} Nov 26 14:32:53 crc kubenswrapper[4695]: I1126 14:32:53.189324 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" event={"ID":"b70edaf7-641b-490c-8041-1ef3d01ab5a0","Type":"ContainerStarted","Data":"21ef9ceda2fbdb572a011d1ed1f6b48ff8b770e8c16bf564b0dbdb35b03c4cfc"} Nov 26 14:33:06 crc kubenswrapper[4695]: I1126 14:33:06.397159 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:33:06 crc kubenswrapper[4695]: I1126 14:33:06.397767 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:33:29 crc kubenswrapper[4695]: I1126 14:33:29.506965 4695 generic.go:334] "Generic (PLEG): container finished" podID="b70edaf7-641b-490c-8041-1ef3d01ab5a0" containerID="b035cde49ab013cf1095a4fa2c5b8d820cef7300fde352c04098100023223115" exitCode=0 Nov 26 14:33:29 crc kubenswrapper[4695]: I1126 14:33:29.507081 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" event={"ID":"b70edaf7-641b-490c-8041-1ef3d01ab5a0","Type":"ContainerDied","Data":"b035cde49ab013cf1095a4fa2c5b8d820cef7300fde352c04098100023223115"} Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.626025 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.677577 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-rrm4z"] Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.691795 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-rrm4z"] Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.784505 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70edaf7-641b-490c-8041-1ef3d01ab5a0-host\") pod \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.784644 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbvbd\" (UniqueName: \"kubernetes.io/projected/b70edaf7-641b-490c-8041-1ef3d01ab5a0-kube-api-access-qbvbd\") pod \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\" (UID: \"b70edaf7-641b-490c-8041-1ef3d01ab5a0\") " Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.785099 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b70edaf7-641b-490c-8041-1ef3d01ab5a0-host" (OuterVolumeSpecName: "host") pod "b70edaf7-641b-490c-8041-1ef3d01ab5a0" (UID: "b70edaf7-641b-490c-8041-1ef3d01ab5a0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.785480 4695 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b70edaf7-641b-490c-8041-1ef3d01ab5a0-host\") on node \"crc\" DevicePath \"\"" Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.792532 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b70edaf7-641b-490c-8041-1ef3d01ab5a0-kube-api-access-qbvbd" (OuterVolumeSpecName: "kube-api-access-qbvbd") pod "b70edaf7-641b-490c-8041-1ef3d01ab5a0" (UID: "b70edaf7-641b-490c-8041-1ef3d01ab5a0"). InnerVolumeSpecName "kube-api-access-qbvbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:33:30 crc kubenswrapper[4695]: I1126 14:33:30.887958 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbvbd\" (UniqueName: \"kubernetes.io/projected/b70edaf7-641b-490c-8041-1ef3d01ab5a0-kube-api-access-qbvbd\") on node \"crc\" DevicePath \"\"" Nov 26 14:33:31 crc kubenswrapper[4695]: I1126 14:33:31.174016 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b70edaf7-641b-490c-8041-1ef3d01ab5a0" path="/var/lib/kubelet/pods/b70edaf7-641b-490c-8041-1ef3d01ab5a0/volumes" Nov 26 14:33:31 crc kubenswrapper[4695]: I1126 14:33:31.529807 4695 scope.go:117] "RemoveContainer" containerID="b035cde49ab013cf1095a4fa2c5b8d820cef7300fde352c04098100023223115" Nov 26 14:33:31 crc kubenswrapper[4695]: I1126 14:33:31.529823 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-rrm4z" Nov 26 14:33:31 crc kubenswrapper[4695]: I1126 14:33:31.832566 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-m7drm"] Nov 26 14:33:31 crc kubenswrapper[4695]: E1126 14:33:31.833063 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b70edaf7-641b-490c-8041-1ef3d01ab5a0" containerName="container-00" Nov 26 14:33:31 crc kubenswrapper[4695]: I1126 14:33:31.833084 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="b70edaf7-641b-490c-8041-1ef3d01ab5a0" containerName="container-00" Nov 26 14:33:31 crc kubenswrapper[4695]: I1126 14:33:31.833396 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="b70edaf7-641b-490c-8041-1ef3d01ab5a0" containerName="container-00" Nov 26 14:33:31 crc kubenswrapper[4695]: I1126 14:33:31.834197 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.010936 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-host\") pod \"crc-debug-m7drm\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.011440 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjdjn\" (UniqueName: \"kubernetes.io/projected/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-kube-api-access-bjdjn\") pod \"crc-debug-m7drm\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.113803 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjdjn\" (UniqueName: 
\"kubernetes.io/projected/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-kube-api-access-bjdjn\") pod \"crc-debug-m7drm\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.113909 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-host\") pod \"crc-debug-m7drm\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.114034 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-host\") pod \"crc-debug-m7drm\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.245216 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjdjn\" (UniqueName: \"kubernetes.io/projected/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-kube-api-access-bjdjn\") pod \"crc-debug-m7drm\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.325711 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:32 crc kubenswrapper[4695]: I1126 14:33:32.540551 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/crc-debug-m7drm" event={"ID":"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb","Type":"ContainerStarted","Data":"3206a24c2fba5ad069b2e7337073c227b7a8e6f81cf75523b59491b446d0c06d"} Nov 26 14:33:33 crc kubenswrapper[4695]: I1126 14:33:33.550741 4695 generic.go:334] "Generic (PLEG): container finished" podID="32aa9c60-8a27-4dd0-a043-bc01b4aed3eb" containerID="d0aeabd4f4d298571ecca9fe141155715ada75653a49dc8838800e63a0c4dba0" exitCode=0 Nov 26 14:33:33 crc kubenswrapper[4695]: I1126 14:33:33.550848 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/crc-debug-m7drm" event={"ID":"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb","Type":"ContainerDied","Data":"d0aeabd4f4d298571ecca9fe141155715ada75653a49dc8838800e63a0c4dba0"} Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.113501 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-m7drm"] Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.121996 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-m7drm"] Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.686682 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.862737 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-host\") pod \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.862810 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-host" (OuterVolumeSpecName: "host") pod "32aa9c60-8a27-4dd0-a043-bc01b4aed3eb" (UID: "32aa9c60-8a27-4dd0-a043-bc01b4aed3eb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.862900 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjdjn\" (UniqueName: \"kubernetes.io/projected/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-kube-api-access-bjdjn\") pod \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\" (UID: \"32aa9c60-8a27-4dd0-a043-bc01b4aed3eb\") " Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.863418 4695 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-host\") on node \"crc\" DevicePath \"\"" Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.871636 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-kube-api-access-bjdjn" (OuterVolumeSpecName: "kube-api-access-bjdjn") pod "32aa9c60-8a27-4dd0-a043-bc01b4aed3eb" (UID: "32aa9c60-8a27-4dd0-a043-bc01b4aed3eb"). InnerVolumeSpecName "kube-api-access-bjdjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:33:34 crc kubenswrapper[4695]: I1126 14:33:34.965667 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bjdjn\" (UniqueName: \"kubernetes.io/projected/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb-kube-api-access-bjdjn\") on node \"crc\" DevicePath \"\"" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.173336 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32aa9c60-8a27-4dd0-a043-bc01b4aed3eb" path="/var/lib/kubelet/pods/32aa9c60-8a27-4dd0-a043-bc01b4aed3eb/volumes" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.282681 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-fd69g"] Nov 26 14:33:35 crc kubenswrapper[4695]: E1126 14:33:35.283523 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32aa9c60-8a27-4dd0-a043-bc01b4aed3eb" containerName="container-00" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.283542 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="32aa9c60-8a27-4dd0-a043-bc01b4aed3eb" containerName="container-00" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.283750 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="32aa9c60-8a27-4dd0-a043-bc01b4aed3eb" containerName="container-00" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.284303 4695 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.475484 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68lvj\" (UniqueName: \"kubernetes.io/projected/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-kube-api-access-68lvj\") pod \"crc-debug-fd69g\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.475630 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-host\") pod \"crc-debug-fd69g\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.577640 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68lvj\" (UniqueName: \"kubernetes.io/projected/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-kube-api-access-68lvj\") pod \"crc-debug-fd69g\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.577739 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-host\") pod \"crc-debug-fd69g\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.577944 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-host\") pod \"crc-debug-fd69g\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc 
kubenswrapper[4695]: I1126 14:33:35.579952 4695 scope.go:117] "RemoveContainer" containerID="d0aeabd4f4d298571ecca9fe141155715ada75653a49dc8838800e63a0c4dba0" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.579979 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-m7drm" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.606777 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68lvj\" (UniqueName: \"kubernetes.io/projected/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-kube-api-access-68lvj\") pod \"crc-debug-fd69g\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc kubenswrapper[4695]: I1126 14:33:35.904841 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:35 crc kubenswrapper[4695]: W1126 14:33:35.936642 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33d8cacd_e0d2_4987_97cc_fe99bf8394b0.slice/crio-f710272024130baca6f9a429deca06695a17d8819f6af643685b46f12519a4cf WatchSource:0}: Error finding container f710272024130baca6f9a429deca06695a17d8819f6af643685b46f12519a4cf: Status 404 returned error can't find the container with id f710272024130baca6f9a429deca06695a17d8819f6af643685b46f12519a4cf Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.396624 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.397097 4695 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.397163 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.398040 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef6fb8c0ab2c4ea2801594a2c8b9b77a9947483ff7104624a94de9342fec41f3"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.398101 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://ef6fb8c0ab2c4ea2801594a2c8b9b77a9947483ff7104624a94de9342fec41f3" gracePeriod=600 Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.597624 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="ef6fb8c0ab2c4ea2801594a2c8b9b77a9947483ff7104624a94de9342fec41f3" exitCode=0 Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.597876 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"ef6fb8c0ab2c4ea2801594a2c8b9b77a9947483ff7104624a94de9342fec41f3"} Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.598454 4695 scope.go:117] "RemoveContainer" 
containerID="f642aacc0eac23afc981f6d1b7df875b5445d1815e78f3a784dade7b13b3ce29" Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.620886 4695 generic.go:334] "Generic (PLEG): container finished" podID="33d8cacd-e0d2-4987-97cc-fe99bf8394b0" containerID="29c4c66c9ca32a489cf52f0a9d5756667bd3938546a57567a8f21110eaff85cd" exitCode=0 Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.620937 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/crc-debug-fd69g" event={"ID":"33d8cacd-e0d2-4987-97cc-fe99bf8394b0","Type":"ContainerDied","Data":"29c4c66c9ca32a489cf52f0a9d5756667bd3938546a57567a8f21110eaff85cd"} Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.620967 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/crc-debug-fd69g" event={"ID":"33d8cacd-e0d2-4987-97cc-fe99bf8394b0","Type":"ContainerStarted","Data":"f710272024130baca6f9a429deca06695a17d8819f6af643685b46f12519a4cf"} Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.673404 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-fd69g"] Nov 26 14:33:36 crc kubenswrapper[4695]: I1126 14:33:36.683945 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4c7q6/crc-debug-fd69g"] Nov 26 14:33:37 crc kubenswrapper[4695]: I1126 14:33:37.648367 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec"} Nov 26 14:33:37 crc kubenswrapper[4695]: I1126 14:33:37.765792 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:37 crc kubenswrapper[4695]: I1126 14:33:37.920318 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-host\") pod \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " Nov 26 14:33:37 crc kubenswrapper[4695]: I1126 14:33:37.920388 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68lvj\" (UniqueName: \"kubernetes.io/projected/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-kube-api-access-68lvj\") pod \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\" (UID: \"33d8cacd-e0d2-4987-97cc-fe99bf8394b0\") " Nov 26 14:33:37 crc kubenswrapper[4695]: I1126 14:33:37.920456 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-host" (OuterVolumeSpecName: "host") pod "33d8cacd-e0d2-4987-97cc-fe99bf8394b0" (UID: "33d8cacd-e0d2-4987-97cc-fe99bf8394b0"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 26 14:33:37 crc kubenswrapper[4695]: I1126 14:33:37.920980 4695 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-host\") on node \"crc\" DevicePath \"\"" Nov 26 14:33:37 crc kubenswrapper[4695]: I1126 14:33:37.928808 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-kube-api-access-68lvj" (OuterVolumeSpecName: "kube-api-access-68lvj") pod "33d8cacd-e0d2-4987-97cc-fe99bf8394b0" (UID: "33d8cacd-e0d2-4987-97cc-fe99bf8394b0"). InnerVolumeSpecName "kube-api-access-68lvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:33:38 crc kubenswrapper[4695]: I1126 14:33:38.022873 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68lvj\" (UniqueName: \"kubernetes.io/projected/33d8cacd-e0d2-4987-97cc-fe99bf8394b0-kube-api-access-68lvj\") on node \"crc\" DevicePath \"\"" Nov 26 14:33:38 crc kubenswrapper[4695]: I1126 14:33:38.658758 4695 scope.go:117] "RemoveContainer" containerID="29c4c66c9ca32a489cf52f0a9d5756667bd3938546a57567a8f21110eaff85cd" Nov 26 14:33:38 crc kubenswrapper[4695]: I1126 14:33:38.658829 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4c7q6/crc-debug-fd69g" Nov 26 14:33:39 crc kubenswrapper[4695]: I1126 14:33:39.173771 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d8cacd-e0d2-4987-97cc-fe99bf8394b0" path="/var/lib/kubelet/pods/33d8cacd-e0d2-4987-97cc-fe99bf8394b0/volumes" Nov 26 14:33:56 crc kubenswrapper[4695]: I1126 14:33:56.908078 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-766c89cd-88742_4c9406e4-9d5c-429a-94b4-6da4283c3462/barbican-api/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.055643 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-766c89cd-88742_4c9406e4-9d5c-429a-94b4-6da4283c3462/barbican-api-log/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.101587 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f5fb9ccb-sctd5_26c9568b-95d0-4b6d-8bed-6da941279a98/barbican-keystone-listener/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.156023 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56f5fb9ccb-sctd5_26c9568b-95d0-4b6d-8bed-6da941279a98/barbican-keystone-listener-log/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.245782 4695 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc8f95cf-vpnct_71ec1963-a024-4fc4-a747-3c2ee03603a4/barbican-worker/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.310889 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dc8f95cf-vpnct_71ec1963-a024-4fc4-a747-3c2ee03603a4/barbican-worker-log/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.503572 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bdl5x_6b85ca84-0932-4ed9-bcc9-883e52f07315/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.519897 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/ceilometer-central-agent/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.601813 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/ceilometer-notification-agent/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.676840 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/proxy-httpd/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.711382 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_0b694fa3-bda0-4522-bc11-61c47db527af/sg-core/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.867025 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_98045621-506a-4a2b-a135-ed37abdf8de5/cinder-api/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.882907 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_98045621-506a-4a2b-a135-ed37abdf8de5/cinder-api-log/0.log" Nov 26 14:33:57 crc kubenswrapper[4695]: I1126 14:33:57.983252 4695 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_65648e25-0d32-4537-9e31-e9ca87f02aea/cinder-scheduler/0.log" Nov 26 14:33:58 crc kubenswrapper[4695]: I1126 14:33:58.091152 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_65648e25-0d32-4537-9e31-e9ca87f02aea/probe/0.log" Nov 26 14:33:58 crc kubenswrapper[4695]: I1126 14:33:58.167433 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lt2v2_960d575b-5f75-45a2-8dbe-dd185d9dc0a0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:33:58 crc kubenswrapper[4695]: I1126 14:33:58.284470 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-fn5fc_168bfff7-248e-4717-beac-8f7986a5d31e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:33:58 crc kubenswrapper[4695]: I1126 14:33:58.357644 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-pblxz_86275742-143a-41e5-8029-aa251663c12e/init/0.log" Nov 26 14:33:58 crc kubenswrapper[4695]: I1126 14:33:58.500539 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-pblxz_86275742-143a-41e5-8029-aa251663c12e/init/0.log" Nov 26 14:33:58 crc kubenswrapper[4695]: I1126 14:33:58.591645 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-4rf24_2d8dd2e9-86e1-4281-ba9b-16f1e33b8b41/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:33:58 crc kubenswrapper[4695]: I1126 14:33:58.593069 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-pblxz_86275742-143a-41e5-8029-aa251663c12e/dnsmasq-dns/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.013467 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_f5a16aeb-231a-4012-9aed-ab91a1fab41e/glance-log/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.040659 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f5a16aeb-231a-4012-9aed-ab91a1fab41e/glance-httpd/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.200527 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d83aab2a-dcf3-44a5-9616-e19d698ea43d/glance-httpd/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.330786 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7d4c9c9dbd-9bbnw_3ca1545d-04c5-45f8-8738-f662db77ffba/horizon/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.458289 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d83aab2a-dcf3-44a5-9616-e19d698ea43d/glance-log/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.688417 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2jszc_509c6c88-4720-4dcc-b9fc-e50ef40c4a6f/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.842758 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-ljn7l_87321f9e-8ed5-40a0-bf49-f5e8c63ba2e5/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:33:59 crc kubenswrapper[4695]: I1126 14:33:59.897209 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-7d4c9c9dbd-9bbnw_3ca1545d-04c5-45f8-8738-f662db77ffba/horizon-log/0.log" Nov 26 14:34:00 crc kubenswrapper[4695]: I1126 14:34:00.096813 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29402761-sd4pc_ab5969ee-b42f-466b-9087-adf2da1d7785/keystone-cron/0.log" Nov 26 14:34:00 crc kubenswrapper[4695]: I1126 14:34:00.174766 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-776844bc66-7hpvs_4d6ef0ca-2f43-400b-a10d-16d7e0ad3f64/keystone-api/0.log" Nov 26 14:34:00 crc kubenswrapper[4695]: I1126 14:34:00.704689 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8d9e632f-d8bf-43f3-bd33-5d0d7a43d08f/kube-state-metrics/0.log" Nov 26 14:34:00 crc kubenswrapper[4695]: I1126 14:34:00.767264 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gjhg7_76b33613-bb4c-4e62-9574-4372603edc01/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:01 crc kubenswrapper[4695]: I1126 14:34:01.052882 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcbb85b97-9vnqg_9c6bc4a2-d4c9-4bdc-a576-03bf4101b606/neutron-httpd/0.log" Nov 26 14:34:01 crc kubenswrapper[4695]: I1126 14:34:01.096809 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-bcbb85b97-9vnqg_9c6bc4a2-d4c9-4bdc-a576-03bf4101b606/neutron-api/0.log" Nov 26 14:34:01 crc kubenswrapper[4695]: I1126 14:34:01.172650 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-9nsgm_7ac47dbe-143a-49da-80b2-e60fc44ebaf4/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:01 crc kubenswrapper[4695]: I1126 14:34:01.722255 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c2fff62-d355-4448-a2f8-a2d9f5c13e9a/nova-api-log/0.log" Nov 26 14:34:01 crc kubenswrapper[4695]: I1126 14:34:01.731722 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell0-conductor-0_5a3fc356-06d7-4e26-bcfb-c610dc6e02be/nova-cell0-conductor-conductor/0.log" Nov 26 14:34:02 crc kubenswrapper[4695]: I1126 14:34:02.094085 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_f8537527-a9d6-41b1-b7ad-a281ee216c45/nova-cell1-conductor-conductor/0.log" Nov 26 14:34:02 crc kubenswrapper[4695]: I1126 14:34:02.121830 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_0ddfca26-214e-472a-90ca-e0088717125e/nova-cell1-novncproxy-novncproxy/0.log" Nov 26 14:34:02 crc kubenswrapper[4695]: I1126 14:34:02.125719 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_6c2fff62-d355-4448-a2f8-a2d9f5c13e9a/nova-api-api/0.log" Nov 26 14:34:02 crc kubenswrapper[4695]: I1126 14:34:02.347404 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-2jzr6_9f43b614-f241-4689-b15b-26bdf3d6e72d/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:02 crc kubenswrapper[4695]: I1126 14:34:02.418656 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec2ef622-e87b-4dde-a1ca-81496cfd3562/nova-metadata-log/0.log" Nov 26 14:34:02 crc kubenswrapper[4695]: I1126 14:34:02.743089 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_4d67e67d-ff4c-46e4-b0af-9eb1c017bf46/nova-scheduler-scheduler/0.log" Nov 26 14:34:02 crc kubenswrapper[4695]: I1126 14:34:02.908148 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b02f07d7-7406-4602-b166-911408fe8bf0/mysql-bootstrap/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.083077 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b02f07d7-7406-4602-b166-911408fe8bf0/mysql-bootstrap/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: 
I1126 14:34:03.125658 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b02f07d7-7406-4602-b166-911408fe8bf0/galera/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.298266 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_82b6b21a-6ed0-43d7-9763-684eca59aa29/mysql-bootstrap/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.513926 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_82b6b21a-6ed0-43d7-9763-684eca59aa29/mysql-bootstrap/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.522495 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_82b6b21a-6ed0-43d7-9763-684eca59aa29/galera/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.673202 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_d282a7dc-4e06-4e82-8b99-ce6f8416c5cc/openstackclient/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.777418 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5cm6x_22e7a6af-7195-45fd-979b-4af39f3cfb62/openstack-network-exporter/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.786845 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_ec2ef622-e87b-4dde-a1ca-81496cfd3562/nova-metadata-metadata/0.log" Nov 26 14:34:03 crc kubenswrapper[4695]: I1126 14:34:03.912628 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovsdb-server-init/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.139478 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovsdb-server/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.149222 4695 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovsdb-server-init/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.157635 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rtt8r_35f623f4-096c-4ac0-9b93-b489fda7cf09/ovs-vswitchd/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.357493 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-zvx8d_9f98833b-dbaf-42bc-a424-8094e025ce87/ovn-controller/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.394030 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-l77jw_72410dcc-406c-43d5-bc58-320471e9df04/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.557163 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26637d33-5a10-4201-b728-2a250279651b/openstack-network-exporter/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.626126 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_26637d33-5a10-4201-b728-2a250279651b/ovn-northd/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.741868 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8/openstack-network-exporter/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.817035 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_2a4fb1e7-a0a8-4bbb-8a3f-52da203e00d8/ovsdbserver-nb/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 14:34:04.824428 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b28b52fd-d5e1-44b4-af26-9fa98d731335/openstack-network-exporter/0.log" Nov 26 14:34:04 crc kubenswrapper[4695]: I1126 
14:34:04.945427 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b28b52fd-d5e1-44b4-af26-9fa98d731335/ovsdbserver-sb/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.141013 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-666cd5b87b-cmnl9_2959a379-6a03-4c8d-b022-47e69ac7636d/placement-api/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.175378 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-666cd5b87b-cmnl9_2959a379-6a03-4c8d-b022-47e69ac7636d/placement-log/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.257452 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e7335d8e-0d9a-4532-9f5b-d91cafe38ca7/setup-container/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.502025 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e7335d8e-0d9a-4532-9f5b-d91cafe38ca7/setup-container/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.507514 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e7335d8e-0d9a-4532-9f5b-d91cafe38ca7/rabbitmq/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.537956 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_51da5818-5d05-4f99-84a7-93eae660a8a7/setup-container/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.809412 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_51da5818-5d05-4f99-84a7-93eae660a8a7/rabbitmq/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.827068 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_51da5818-5d05-4f99-84a7-93eae660a8a7/setup-container/0.log" Nov 26 14:34:05 crc kubenswrapper[4695]: I1126 14:34:05.864653 4695 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-v5bdh_fbe77fc2-bcee-446d-a02c-5a992ab5dcae/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:06 crc kubenswrapper[4695]: I1126 14:34:06.023117 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-2pppt_6d8a0921-3704-485a-8ee9-c6250fd2d59e/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:06 crc kubenswrapper[4695]: I1126 14:34:06.251987 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-2sq78_d8407b26-4534-4252-bccf-4e82cea0cd6e/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:06 crc kubenswrapper[4695]: I1126 14:34:06.419659 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-2twlt_d77b20d3-631b-481b-b480-226968d0b73c/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:06 crc kubenswrapper[4695]: I1126 14:34:06.623787 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-76hkl_08ef121f-97dc-4e9e-a466-438d25f2391e/ssh-known-hosts-edpm-deployment/0.log" Nov 26 14:34:06 crc kubenswrapper[4695]: I1126 14:34:06.855938 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7d6d6689f5-n925b_0cb53a75-c198-433b-b342-7acf8ed7dc0c/proxy-server/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.013633 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7d6d6689f5-n925b_0cb53a75-c198-433b-b342-7acf8ed7dc0c/proxy-httpd/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.032577 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-n2464_bc139d50-e9df-4c3c-9ae0-cf5a4c7f0205/swift-ring-rebalance/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 
14:34:07.210829 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-auditor/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.248986 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-replicator/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.273082 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-reaper/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.350475 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/account-server/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.438743 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-auditor/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.495547 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-server/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.507961 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-replicator/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.556307 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/container-updater/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.636898 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-auditor/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.717134 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-expirer/0.log" Nov 26 14:34:07 crc kubenswrapper[4695]: I1126 14:34:07.723873 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-replicator/0.log" Nov 26 14:34:08 crc kubenswrapper[4695]: I1126 14:34:08.411237 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-server/0.log" Nov 26 14:34:08 crc kubenswrapper[4695]: I1126 14:34:08.412535 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/rsync/0.log" Nov 26 14:34:08 crc kubenswrapper[4695]: I1126 14:34:08.412568 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/object-updater/0.log" Nov 26 14:34:08 crc kubenswrapper[4695]: I1126 14:34:08.429456 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b77b4e90-5d1a-4724-a57f-2ff4a394d434/swift-recon-cron/0.log" Nov 26 14:34:08 crc kubenswrapper[4695]: I1126 14:34:08.665171 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-g8dhs_0fdaed7b-61f1-4840-88c7-f997a45a27ca/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:08 crc kubenswrapper[4695]: I1126 14:34:08.690566 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_d7930b08-66ca-496a-94a1-b68e2fe60177/tempest-tests-tempest-tests-runner/0.log" Nov 26 14:34:08 crc kubenswrapper[4695]: I1126 14:34:08.849956 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_5e2a3eda-3b39-4953-b576-ae3652af0195/test-operator-logs-container/0.log" Nov 26 14:34:09 crc kubenswrapper[4695]: I1126 
14:34:09.000248 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jbnll_354489f4-e2ae-4a52-8708-5c495c729662/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 26 14:34:18 crc kubenswrapper[4695]: I1126 14:34:18.232974 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_95ed69b1-d83c-4967-a627-6e52dc6da41b/memcached/0.log" Nov 26 14:34:35 crc kubenswrapper[4695]: I1126 14:34:35.693480 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/util/0.log" Nov 26 14:34:35 crc kubenswrapper[4695]: I1126 14:34:35.899305 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/pull/0.log" Nov 26 14:34:35 crc kubenswrapper[4695]: I1126 14:34:35.940035 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/util/0.log" Nov 26 14:34:35 crc kubenswrapper[4695]: I1126 14:34:35.941109 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/pull/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.245221 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/util/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.276828 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/pull/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.309775 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_067fead7bf8fa950f94434abb02629a8de49d5c32afdead0cd1da0fff5gbz74_74319b7a-3ea1-4750-8c18-4e4578472276/extract/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.437366 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-xtp7h_a487eafc-c65d-4ce9-b801-e489882a4dfa/kube-rbac-proxy/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.498721 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b64f4fb85-xtp7h_a487eafc-c65d-4ce9-b801-e489882a4dfa/manager/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.585965 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-qbchs_868435aa-9f77-46df-af13-ae24b16dee14/kube-rbac-proxy/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.692210 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6b7f75547b-qbchs_868435aa-9f77-46df-af13-ae24b16dee14/manager/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.753566 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-twcbz_dbf58d06-6729-4a3e-8682-641649f1ecd2/kube-rbac-proxy/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: I1126 14:34:36.811223 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-955677c94-twcbz_dbf58d06-6729-4a3e-8682-641649f1ecd2/manager/0.log" Nov 26 14:34:36 crc kubenswrapper[4695]: 
I1126 14:34:36.930398 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-6t784_490f30b9-4b79-4a35-a77d-44c8a90b5dcf/kube-rbac-proxy/0.log" Nov 26 14:34:37 crc kubenswrapper[4695]: I1126 14:34:37.033092 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-589cbd6b5b-6t784_490f30b9-4b79-4a35-a77d-44c8a90b5dcf/manager/0.log" Nov 26 14:34:37 crc kubenswrapper[4695]: I1126 14:34:37.602751 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-s22qh_f76ff94a-b97d-4bbc-bc03-3b8df6d35095/kube-rbac-proxy/0.log" Nov 26 14:34:37 crc kubenswrapper[4695]: I1126 14:34:37.737933 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5b77f656f-s22qh_f76ff94a-b97d-4bbc-bc03-3b8df6d35095/manager/0.log" Nov 26 14:34:37 crc kubenswrapper[4695]: I1126 14:34:37.760905 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-mljjd_d71ecb02-382d-4fde-b349-343c97f769fd/kube-rbac-proxy/0.log" Nov 26 14:34:37 crc kubenswrapper[4695]: I1126 14:34:37.823598 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5d494799bf-mljjd_d71ecb02-382d-4fde-b349-343c97f769fd/manager/0.log" Nov 26 14:34:37 crc kubenswrapper[4695]: I1126 14:34:37.926956 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-v2jnr_c66a73b5-1103-497f-87a6-70d964111fc9/kube-rbac-proxy/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.123879 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-v2jnr_c66a73b5-1103-497f-87a6-70d964111fc9/manager/0.log" Nov 26 
14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.152534 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-vk52w_d84948ad-d0e9-4d86-97a7-1a0d9e13d858/manager/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.152735 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-67cb4dc6d4-vk52w_d84948ad-d0e9-4d86-97a7-1a0d9e13d858/kube-rbac-proxy/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.296812 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ndnf7_8d102669-be66-4bc6-8328-3e7d8a66f4c1/kube-rbac-proxy/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.392550 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7b4567c7cf-ndnf7_8d102669-be66-4bc6-8328-3e7d8a66f4c1/manager/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.489404 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-n8tkz_fcca3fad-5da8-4242-894e-9dd5917f3828/kube-rbac-proxy/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.520485 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5d499bf58b-n8tkz_fcca3fad-5da8-4242-894e-9dd5917f3828/manager/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.591253 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-28gfw_e1127f2e-e8b5-4002-9f8b-7f3a286640ba/kube-rbac-proxy/0.log" Nov 26 14:34:38 crc kubenswrapper[4695]: I1126 14:34:38.686943 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-66f4dd4bc7-28gfw_e1127f2e-e8b5-4002-9f8b-7f3a286640ba/manager/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 14:34:39.409654 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-rbpf5_7e515c4b-ebc1-42fc-a3b9-406552e7f797/kube-rbac-proxy/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 14:34:39.562253 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-5qjxh_5c7bfa9c-0c31-4ece-915a-c4e4d37fadad/kube-rbac-proxy/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 14:34:39.619725 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-6fdcddb789-rbpf5_7e515c4b-ebc1-42fc-a3b9-406552e7f797/manager/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 14:34:39.653640 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-5qjxh_5c7bfa9c-0c31-4ece-915a-c4e4d37fadad/manager/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 14:34:39.794266 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-s9mbz_51be52bb-362c-4b52-9962-a5e6b3e9dddb/kube-rbac-proxy/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 14:34:39.834171 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-64cdc6ff96-s9mbz_51be52bb-362c-4b52-9962-a5e6b3e9dddb/manager/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 14:34:39.901657 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f_071300a7-9f99-4e3f-8fd7-ceabb7ba738d/kube-rbac-proxy/0.log" Nov 26 14:34:39 crc kubenswrapper[4695]: I1126 
14:34:39.927395 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-5fcdb54b6b64f2f_071300a7-9f99-4e3f-8fd7-ceabb7ba738d/manager/0.log" Nov 26 14:34:40 crc kubenswrapper[4695]: I1126 14:34:40.436961 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-r56l2_281f751a-55f6-4753-8014-8e52bd983a45/registry-server/0.log" Nov 26 14:34:40 crc kubenswrapper[4695]: I1126 14:34:40.449903 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-78fd744894-tc5nj_301c123c-e342-4fda-b713-03954d29dd4a/operator/0.log" Nov 26 14:34:40 crc kubenswrapper[4695]: I1126 14:34:40.455932 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-chvvt_16e51188-65ad-4a0d-a571-5f02e38d68b6/kube-rbac-proxy/0.log" Nov 26 14:34:40 crc kubenswrapper[4695]: I1126 14:34:40.661697 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-j27xp_ba47d9b1-160d-40da-a691-db4b4e2557d5/kube-rbac-proxy/0.log" Nov 26 14:34:40 crc kubenswrapper[4695]: I1126 14:34:40.678734 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-56897c768d-chvvt_16e51188-65ad-4a0d-a571-5f02e38d68b6/manager/0.log" Nov 26 14:34:40 crc kubenswrapper[4695]: I1126 14:34:40.720256 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-57988cc5b5-j27xp_ba47d9b1-160d-40da-a691-db4b4e2557d5/manager/0.log" Nov 26 14:34:40 crc kubenswrapper[4695]: I1126 14:34:40.936238 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-z2pb5_375f27a9-421e-422e-baee-6d5ac575788a/operator/0.log" Nov 26 14:34:40 
crc kubenswrapper[4695]: I1126 14:34:40.986522 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-t7sqd_b044c065-4ba3-4390-88c9-340e2fc1ba2f/kube-rbac-proxy/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.058128 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6bcd57bc9d-w2r7r_4c653020-2777-48e3-b06f-b33a61aabc36/manager/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.096292 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-d77b94747-t7sqd_b044c065-4ba3-4390-88c9-340e2fc1ba2f/manager/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.239888 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vdrzg_92f90071-d080-4579-9a87-aef8e8b760d3/kube-rbac-proxy/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.261647 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-vdrzg_92f90071-d080-4579-9a87-aef8e8b760d3/manager/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.286789 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-z6584_b87cee59-5442-46a0-b5d2-8467196ceedf/kube-rbac-proxy/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.368180 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cd6c7f4c8-z6584_b87cee59-5442-46a0-b5d2-8467196ceedf/manager/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.440104 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-jsmdt_9716d6f2-3c85-4d5d-a261-966d0e6d6dfc/manager/0.log" Nov 26 14:34:41 crc kubenswrapper[4695]: I1126 14:34:41.454481 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-656dcb59d4-jsmdt_9716d6f2-3c85-4d5d-a261-966d0e6d6dfc/kube-rbac-proxy/0.log" Nov 26 14:35:00 crc kubenswrapper[4695]: I1126 14:35:00.422027 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-5sbb9_7645629b-9409-48bb-94cc-a63e3bd1fe4b/control-plane-machine-set-operator/0.log" Nov 26 14:35:00 crc kubenswrapper[4695]: I1126 14:35:00.565500 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-72w4t_6f93277a-4f74-4839-8b28-2ff1bfd6f7ca/kube-rbac-proxy/0.log" Nov 26 14:35:00 crc kubenswrapper[4695]: I1126 14:35:00.566789 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-72w4t_6f93277a-4f74-4839-8b28-2ff1bfd6f7ca/machine-api-operator/0.log" Nov 26 14:35:12 crc kubenswrapper[4695]: I1126 14:35:12.305760 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-dsj6h_cff7ef71-26ea-4334-b5b0-9d6c931d6fff/cert-manager-controller/0.log" Nov 26 14:35:12 crc kubenswrapper[4695]: I1126 14:35:12.466097 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-nczqg_221fa061-b961-4fb4-b8bb-e280873ce253/cert-manager-webhook/0.log" Nov 26 14:35:12 crc kubenswrapper[4695]: I1126 14:35:12.469773 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-g67gn_f548c3df-1cb9-4e28-af35-4471c3633b76/cert-manager-cainjector/0.log" Nov 26 14:35:24 crc kubenswrapper[4695]: I1126 14:35:24.320723 4695 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-4jrpz_2c957cde-292c-4ede-a2c3-dd684372157e/nmstate-console-plugin/0.log" Nov 26 14:35:24 crc kubenswrapper[4695]: I1126 14:35:24.467775 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-72fwr_11fb4b6b-098b-49d0-884e-460720ddfcd5/nmstate-handler/0.log" Nov 26 14:35:24 crc kubenswrapper[4695]: I1126 14:35:24.532227 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-9trxh_e52265d8-4340-457f-824f-c593dc560e5b/nmstate-metrics/0.log" Nov 26 14:35:24 crc kubenswrapper[4695]: I1126 14:35:24.533451 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-9trxh_e52265d8-4340-457f-824f-c593dc560e5b/kube-rbac-proxy/0.log" Nov 26 14:35:24 crc kubenswrapper[4695]: I1126 14:35:24.702866 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-ttqk2_326ba3c5-ae42-4131-99a0-2ef80841d58b/nmstate-operator/0.log" Nov 26 14:35:24 crc kubenswrapper[4695]: I1126 14:35:24.754819 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-b7f9t_61ef0d85-8eb9-4241-958b-12c3a4b4a064/nmstate-webhook/0.log" Nov 26 14:35:36 crc kubenswrapper[4695]: I1126 14:35:36.396971 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:35:36 crc kubenswrapper[4695]: I1126 14:35:36.397495 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:35:38 crc kubenswrapper[4695]: I1126 14:35:38.946571 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7h5k9_8c2eab4a-4615-4dce-a0bb-e3316d4e2be9/kube-rbac-proxy/0.log" Nov 26 14:35:39 crc kubenswrapper[4695]: I1126 14:35:39.094171 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-7h5k9_8c2eab4a-4615-4dce-a0bb-e3316d4e2be9/controller/0.log" Nov 26 14:35:39 crc kubenswrapper[4695]: I1126 14:35:39.844825 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.014909 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.015180 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.015191 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.040835 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.226456 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.249865 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.251723 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.274308 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.435479 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-reloader/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.439308 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-frr-files/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.454844 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/cp-metrics/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.475993 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/controller/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.573982 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/frr-metrics/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.633669 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/kube-rbac-proxy/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.647606 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/kube-rbac-proxy-frr/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.834104 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/reloader/0.log" Nov 26 14:35:40 crc kubenswrapper[4695]: I1126 14:35:40.887209 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-5sn6p_9b894f31-fadd-4034-a93a-d7767eb59691/frr-k8s-webhook-server/0.log" Nov 26 14:35:41 crc kubenswrapper[4695]: I1126 14:35:41.098461 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d6f5fdd48-hdrhk_ae146f87-799d-4013-954b-7b3df8521851/manager/0.log" Nov 26 14:35:41 crc kubenswrapper[4695]: I1126 14:35:41.349365 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-868bccc468-mss5n_e92050c8-b486-4429-ad39-f39f154ff06f/webhook-server/0.log" Nov 26 14:35:41 crc kubenswrapper[4695]: I1126 14:35:41.363644 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4b6f5_bc3d7aa1-897c-44e4-a493-4a80ef1142fe/kube-rbac-proxy/0.log" Nov 26 14:35:41 crc kubenswrapper[4695]: I1126 14:35:41.878554 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4b6f5_bc3d7aa1-897c-44e4-a493-4a80ef1142fe/speaker/0.log" Nov 26 14:35:41 crc kubenswrapper[4695]: I1126 14:35:41.901242 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tdrxb_c37afac1-e7c9-40b4-b458-6c9f84dffdf9/frr/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: I1126 14:35:53.466545 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/util/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: 
I1126 14:35:53.638324 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/util/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: I1126 14:35:53.657499 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/pull/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: I1126 14:35:53.682663 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/pull/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: I1126 14:35:53.822397 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/pull/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: I1126 14:35:53.827453 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/util/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: I1126 14:35:53.841155 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772erpmlc_5d0013c1-72c9-498b-bd7c-702efbd0ea45/extract/0.log" Nov 26 14:35:53 crc kubenswrapper[4695]: I1126 14:35:53.991116 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-utilities/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.117863 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-utilities/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.156266 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-content/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.187024 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-content/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.324122 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-utilities/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.362261 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/extract-content/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.546628 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-utilities/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.739387 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-content/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.763149 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-content/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.845101 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-utilities/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.965883 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-utilities/0.log" Nov 26 14:35:54 crc kubenswrapper[4695]: I1126 14:35:54.983821 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/extract-content/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.013560 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-4p5cl_54427ae6-22d0-4333-bbc2-71746260bc34/registry-server/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.195187 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/util/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.386688 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/util/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.492649 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/pull/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.568371 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/pull/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.750547 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/pull/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.753031 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9gxbw_9dc615cf-d3c4-4af3-a159-b90c9e970ab7/registry-server/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.773444 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/util/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.842384 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c6lvjwd_98eff931-5636-4fab-b319-883648640d79/extract/0.log" Nov 26 14:35:55 crc kubenswrapper[4695]: I1126 14:35:55.950942 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-kgsbp_bfc137bd-03b5-4b18-a610-f713f2681cc1/marketplace-operator/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.058498 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-utilities/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.264847 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-utilities/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.290868 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-content/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.290911 4695 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-content/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.466619 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-utilities/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.507924 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/extract-content/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.592174 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-8kklb_6423e816-b0ce-4d7d-9077-a787cb8d71ba/registry-server/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.706995 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-utilities/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.839826 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-content/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.841261 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-content/0.log" Nov 26 14:35:56 crc kubenswrapper[4695]: I1126 14:35:56.863410 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-utilities/0.log" Nov 26 14:35:57 crc kubenswrapper[4695]: I1126 14:35:57.034914 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-utilities/0.log" 
Nov 26 14:35:57 crc kubenswrapper[4695]: I1126 14:35:57.056069 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/extract-content/0.log" Nov 26 14:35:57 crc kubenswrapper[4695]: I1126 14:35:57.628586 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qhr75_022ac9aa-fecd-4f12-89ac-c3ed0dd88270/registry-server/0.log" Nov 26 14:36:06 crc kubenswrapper[4695]: I1126 14:36:06.396577 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:36:06 crc kubenswrapper[4695]: I1126 14:36:06.397428 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:36:36 crc kubenswrapper[4695]: I1126 14:36:36.397290 4695 patch_prober.go:28] interesting pod/machine-config-daemon-mmgd2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 26 14:36:36 crc kubenswrapper[4695]: I1126 14:36:36.397812 4695 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 26 14:36:36 crc kubenswrapper[4695]: 
I1126 14:36:36.397861 4695 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" Nov 26 14:36:36 crc kubenswrapper[4695]: I1126 14:36:36.398655 4695 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec"} pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 26 14:36:36 crc kubenswrapper[4695]: I1126 14:36:36.398705 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerName="machine-config-daemon" containerID="cri-o://724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" gracePeriod=600 Nov 26 14:36:36 crc kubenswrapper[4695]: E1126 14:36:36.536589 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:36:37 crc kubenswrapper[4695]: I1126 14:36:37.259952 4695 generic.go:334] "Generic (PLEG): container finished" podID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" exitCode=0 Nov 26 14:36:37 crc kubenswrapper[4695]: I1126 14:36:37.260036 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" 
event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerDied","Data":"724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec"} Nov 26 14:36:37 crc kubenswrapper[4695]: I1126 14:36:37.260139 4695 scope.go:117] "RemoveContainer" containerID="ef6fb8c0ab2c4ea2801594a2c8b9b77a9947483ff7104624a94de9342fec41f3" Nov 26 14:36:37 crc kubenswrapper[4695]: I1126 14:36:37.261221 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:36:37 crc kubenswrapper[4695]: E1126 14:36:37.261560 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:36:48 crc kubenswrapper[4695]: I1126 14:36:48.162526 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:36:48 crc kubenswrapper[4695]: E1126 14:36:48.163563 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:37:02 crc kubenswrapper[4695]: I1126 14:37:02.164568 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:37:02 crc kubenswrapper[4695]: E1126 14:37:02.166410 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:37:13 crc kubenswrapper[4695]: I1126 14:37:13.163396 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:37:13 crc kubenswrapper[4695]: E1126 14:37:13.164303 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:37:25 crc kubenswrapper[4695]: I1126 14:37:25.163190 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:37:25 crc kubenswrapper[4695]: E1126 14:37:25.164065 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:37:36 crc kubenswrapper[4695]: I1126 14:37:36.947015 4695 generic.go:334] "Generic (PLEG): container finished" podID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerID="e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada" exitCode=0 Nov 26 14:37:36 crc kubenswrapper[4695]: 
I1126 14:37:36.947519 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" event={"ID":"73e4ac97-5cec-42a6-9468-74f019b091a6","Type":"ContainerDied","Data":"e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada"} Nov 26 14:37:36 crc kubenswrapper[4695]: I1126 14:37:36.949420 4695 scope.go:117] "RemoveContainer" containerID="e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada" Nov 26 14:37:37 crc kubenswrapper[4695]: I1126 14:37:37.662551 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4c7q6_must-gather-pxx9n_73e4ac97-5cec-42a6-9468-74f019b091a6/gather/0.log" Nov 26 14:37:38 crc kubenswrapper[4695]: I1126 14:37:38.162390 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:37:38 crc kubenswrapper[4695]: E1126 14:37:38.162645 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.152168 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4c4x8"] Nov 26 14:37:40 crc kubenswrapper[4695]: E1126 14:37:40.153068 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d8cacd-e0d2-4987-97cc-fe99bf8394b0" containerName="container-00" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.153087 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d8cacd-e0d2-4987-97cc-fe99bf8394b0" containerName="container-00" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.153319 4695 
memory_manager.go:354] "RemoveStaleState removing state" podUID="33d8cacd-e0d2-4987-97cc-fe99bf8394b0" containerName="container-00" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.155016 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.167075 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4c4x8"] Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.296988 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-utilities\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.297126 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-catalog-content\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.297172 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9b72\" (UniqueName: \"kubernetes.io/projected/02c1f391-6ed8-46b0-896c-2a18afb7c221-kube-api-access-g9b72\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.399398 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-utilities\") pod \"redhat-operators-4c4x8\" 
(UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.399479 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-catalog-content\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.399514 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9b72\" (UniqueName: \"kubernetes.io/projected/02c1f391-6ed8-46b0-896c-2a18afb7c221-kube-api-access-g9b72\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.400219 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-utilities\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.400446 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-catalog-content\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.446007 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9b72\" (UniqueName: \"kubernetes.io/projected/02c1f391-6ed8-46b0-896c-2a18afb7c221-kube-api-access-g9b72\") pod \"redhat-operators-4c4x8\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " 
pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.494697 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.948555 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4c4x8"] Nov 26 14:37:40 crc kubenswrapper[4695]: I1126 14:37:40.990618 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4x8" event={"ID":"02c1f391-6ed8-46b0-896c-2a18afb7c221","Type":"ContainerStarted","Data":"cb6211c49c19f3f394a836af3edff8119be09e46456f556afabaf956d8bf3bc4"} Nov 26 14:37:42 crc kubenswrapper[4695]: I1126 14:37:42.004700 4695 generic.go:334] "Generic (PLEG): container finished" podID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerID="d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26" exitCode=0 Nov 26 14:37:42 crc kubenswrapper[4695]: I1126 14:37:42.004754 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4x8" event={"ID":"02c1f391-6ed8-46b0-896c-2a18afb7c221","Type":"ContainerDied","Data":"d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26"} Nov 26 14:37:42 crc kubenswrapper[4695]: I1126 14:37:42.008172 4695 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 26 14:37:44 crc kubenswrapper[4695]: I1126 14:37:44.033052 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4x8" event={"ID":"02c1f391-6ed8-46b0-896c-2a18afb7c221","Type":"ContainerStarted","Data":"4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3"} Nov 26 14:37:45 crc kubenswrapper[4695]: I1126 14:37:45.047093 4695 generic.go:334] "Generic (PLEG): container finished" podID="02c1f391-6ed8-46b0-896c-2a18afb7c221" 
containerID="4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3" exitCode=0 Nov 26 14:37:45 crc kubenswrapper[4695]: I1126 14:37:45.047160 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4x8" event={"ID":"02c1f391-6ed8-46b0-896c-2a18afb7c221","Type":"ContainerDied","Data":"4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3"} Nov 26 14:37:46 crc kubenswrapper[4695]: I1126 14:37:46.059545 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4x8" event={"ID":"02c1f391-6ed8-46b0-896c-2a18afb7c221","Type":"ContainerStarted","Data":"1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e"} Nov 26 14:37:46 crc kubenswrapper[4695]: I1126 14:37:46.084132 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4c4x8" podStartSLOduration=2.506334957 podStartE2EDuration="6.084112336s" podCreationTimestamp="2025-11-26 14:37:40 +0000 UTC" firstStartedPulling="2025-11-26 14:37:42.007962325 +0000 UTC m=+4445.643787407" lastFinishedPulling="2025-11-26 14:37:45.585739684 +0000 UTC m=+4449.221564786" observedRunningTime="2025-11-26 14:37:46.074937653 +0000 UTC m=+4449.710762795" watchObservedRunningTime="2025-11-26 14:37:46.084112336 +0000 UTC m=+4449.719937418" Nov 26 14:37:47 crc kubenswrapper[4695]: I1126 14:37:47.970477 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4c7q6/must-gather-pxx9n"] Nov 26 14:37:47 crc kubenswrapper[4695]: I1126 14:37:47.971058 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerName="copy" containerID="cri-o://823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022" gracePeriod=2 Nov 26 14:37:47 crc kubenswrapper[4695]: I1126 14:37:47.983243 4695 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-must-gather-4c7q6/must-gather-pxx9n"] Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.417673 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4c7q6_must-gather-pxx9n_73e4ac97-5cec-42a6-9468-74f019b091a6/copy/0.log" Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.418477 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.555918 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73e4ac97-5cec-42a6-9468-74f019b091a6-must-gather-output\") pod \"73e4ac97-5cec-42a6-9468-74f019b091a6\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.556076 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfhsj\" (UniqueName: \"kubernetes.io/projected/73e4ac97-5cec-42a6-9468-74f019b091a6-kube-api-access-hfhsj\") pod \"73e4ac97-5cec-42a6-9468-74f019b091a6\" (UID: \"73e4ac97-5cec-42a6-9468-74f019b091a6\") " Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.561812 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73e4ac97-5cec-42a6-9468-74f019b091a6-kube-api-access-hfhsj" (OuterVolumeSpecName: "kube-api-access-hfhsj") pod "73e4ac97-5cec-42a6-9468-74f019b091a6" (UID: "73e4ac97-5cec-42a6-9468-74f019b091a6"). InnerVolumeSpecName "kube-api-access-hfhsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.658883 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfhsj\" (UniqueName: \"kubernetes.io/projected/73e4ac97-5cec-42a6-9468-74f019b091a6-kube-api-access-hfhsj\") on node \"crc\" DevicePath \"\"" Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.706770 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73e4ac97-5cec-42a6-9468-74f019b091a6-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "73e4ac97-5cec-42a6-9468-74f019b091a6" (UID: "73e4ac97-5cec-42a6-9468-74f019b091a6"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:37:48 crc kubenswrapper[4695]: I1126 14:37:48.760622 4695 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/73e4ac97-5cec-42a6-9468-74f019b091a6-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.094721 4695 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4c7q6_must-gather-pxx9n_73e4ac97-5cec-42a6-9468-74f019b091a6/copy/0.log" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.095602 4695 generic.go:334] "Generic (PLEG): container finished" podID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerID="823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022" exitCode=143 Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.095912 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4c7q6/must-gather-pxx9n" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.095783 4695 scope.go:117] "RemoveContainer" containerID="823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.143960 4695 scope.go:117] "RemoveContainer" containerID="e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.178946 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" path="/var/lib/kubelet/pods/73e4ac97-5cec-42a6-9468-74f019b091a6/volumes" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.411497 4695 scope.go:117] "RemoveContainer" containerID="823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022" Nov 26 14:37:49 crc kubenswrapper[4695]: E1126 14:37:49.412092 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022\": container with ID starting with 823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022 not found: ID does not exist" containerID="823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.412146 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022"} err="failed to get container status \"823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022\": rpc error: code = NotFound desc = could not find container \"823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022\": container with ID starting with 823ac1c3fb84a4df4bc84eada00eb20a43a3c2f5f3750b29f698eaab74b10022 not found: ID does not exist" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.412180 4695 
scope.go:117] "RemoveContainer" containerID="e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada" Nov 26 14:37:49 crc kubenswrapper[4695]: E1126 14:37:49.412579 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada\": container with ID starting with e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada not found: ID does not exist" containerID="e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada" Nov 26 14:37:49 crc kubenswrapper[4695]: I1126 14:37:49.412619 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada"} err="failed to get container status \"e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada\": rpc error: code = NotFound desc = could not find container \"e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada\": container with ID starting with e220e0c6d273e76599e0e1ee125105d576f0e277e88d7356d877c94d4d1ebada not found: ID does not exist" Nov 26 14:37:50 crc kubenswrapper[4695]: I1126 14:37:50.495724 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:50 crc kubenswrapper[4695]: I1126 14:37:50.496009 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:37:51 crc kubenswrapper[4695]: I1126 14:37:51.549239 4695 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4c4x8" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="registry-server" probeResult="failure" output=< Nov 26 14:37:51 crc kubenswrapper[4695]: timeout: failed to connect service ":50051" within 1s Nov 26 14:37:51 crc kubenswrapper[4695]: > Nov 26 14:37:52 crc kubenswrapper[4695]: 
I1126 14:37:52.162662 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:37:52 crc kubenswrapper[4695]: E1126 14:37:52.162997 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:38:00 crc kubenswrapper[4695]: I1126 14:38:00.690600 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:38:00 crc kubenswrapper[4695]: I1126 14:38:00.747028 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:38:00 crc kubenswrapper[4695]: I1126 14:38:00.931835 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4c4x8"] Nov 26 14:38:02 crc kubenswrapper[4695]: I1126 14:38:02.299129 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4c4x8" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="registry-server" containerID="cri-o://1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e" gracePeriod=2 Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.001111 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.177387 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-catalog-content\") pod \"02c1f391-6ed8-46b0-896c-2a18afb7c221\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.177526 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9b72\" (UniqueName: \"kubernetes.io/projected/02c1f391-6ed8-46b0-896c-2a18afb7c221-kube-api-access-g9b72\") pod \"02c1f391-6ed8-46b0-896c-2a18afb7c221\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.177601 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-utilities\") pod \"02c1f391-6ed8-46b0-896c-2a18afb7c221\" (UID: \"02c1f391-6ed8-46b0-896c-2a18afb7c221\") " Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.178369 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-utilities" (OuterVolumeSpecName: "utilities") pod "02c1f391-6ed8-46b0-896c-2a18afb7c221" (UID: "02c1f391-6ed8-46b0-896c-2a18afb7c221"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.188702 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02c1f391-6ed8-46b0-896c-2a18afb7c221-kube-api-access-g9b72" (OuterVolumeSpecName: "kube-api-access-g9b72") pod "02c1f391-6ed8-46b0-896c-2a18afb7c221" (UID: "02c1f391-6ed8-46b0-896c-2a18afb7c221"). InnerVolumeSpecName "kube-api-access-g9b72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.269413 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "02c1f391-6ed8-46b0-896c-2a18afb7c221" (UID: "02c1f391-6ed8-46b0-896c-2a18afb7c221"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.279822 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.279861 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9b72\" (UniqueName: \"kubernetes.io/projected/02c1f391-6ed8-46b0-896c-2a18afb7c221-kube-api-access-g9b72\") on node \"crc\" DevicePath \"\"" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.279878 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/02c1f391-6ed8-46b0-896c-2a18afb7c221-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.311111 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4c4x8" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.311141 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4x8" event={"ID":"02c1f391-6ed8-46b0-896c-2a18afb7c221","Type":"ContainerDied","Data":"1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e"} Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.311185 4695 scope.go:117] "RemoveContainer" containerID="1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.311040 4695 generic.go:334] "Generic (PLEG): container finished" podID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerID="1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e" exitCode=0 Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.311266 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4c4x8" event={"ID":"02c1f391-6ed8-46b0-896c-2a18afb7c221","Type":"ContainerDied","Data":"cb6211c49c19f3f394a836af3edff8119be09e46456f556afabaf956d8bf3bc4"} Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.337132 4695 scope.go:117] "RemoveContainer" containerID="4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.358104 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4c4x8"] Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.363158 4695 scope.go:117] "RemoveContainer" containerID="d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.371966 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4c4x8"] Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.413537 4695 scope.go:117] "RemoveContainer" 
containerID="1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e" Nov 26 14:38:03 crc kubenswrapper[4695]: E1126 14:38:03.414396 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e\": container with ID starting with 1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e not found: ID does not exist" containerID="1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.414438 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e"} err="failed to get container status \"1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e\": rpc error: code = NotFound desc = could not find container \"1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e\": container with ID starting with 1ea8aa47b16eae5932e23b326cdb09ac1fdfad7dd57d93da0a3946255eff5d2e not found: ID does not exist" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.414467 4695 scope.go:117] "RemoveContainer" containerID="4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3" Nov 26 14:38:03 crc kubenswrapper[4695]: E1126 14:38:03.414973 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3\": container with ID starting with 4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3 not found: ID does not exist" containerID="4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.415020 4695 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3"} err="failed to get container status \"4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3\": rpc error: code = NotFound desc = could not find container \"4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3\": container with ID starting with 4dec6a9a1fc3154eae68bd347a55eeca1f0267bac7bbb60dec8f2b9d880abde3 not found: ID does not exist" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.415079 4695 scope.go:117] "RemoveContainer" containerID="d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26" Nov 26 14:38:03 crc kubenswrapper[4695]: E1126 14:38:03.415332 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26\": container with ID starting with d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26 not found: ID does not exist" containerID="d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26" Nov 26 14:38:03 crc kubenswrapper[4695]: I1126 14:38:03.415376 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26"} err="failed to get container status \"d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26\": rpc error: code = NotFound desc = could not find container \"d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26\": container with ID starting with d0d815cc73a4e479ea277e91bcf6cc7589129294f5353cfad1d897c3b2bd0f26 not found: ID does not exist" Nov 26 14:38:05 crc kubenswrapper[4695]: I1126 14:38:05.175398 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" path="/var/lib/kubelet/pods/02c1f391-6ed8-46b0-896c-2a18afb7c221/volumes" Nov 26 14:38:07 crc kubenswrapper[4695]: I1126 
14:38:07.168630 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:38:07 crc kubenswrapper[4695]: E1126 14:38:07.168892 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:38:19 crc kubenswrapper[4695]: I1126 14:38:19.162695 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:38:19 crc kubenswrapper[4695]: E1126 14:38:19.163790 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:38:34 crc kubenswrapper[4695]: I1126 14:38:34.162742 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:38:34 crc kubenswrapper[4695]: E1126 14:38:34.163739 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:38:47 crc 
kubenswrapper[4695]: I1126 14:38:47.293269 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fcrkz"] Nov 26 14:38:47 crc kubenswrapper[4695]: E1126 14:38:47.294159 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="registry-server" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294171 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="registry-server" Nov 26 14:38:47 crc kubenswrapper[4695]: E1126 14:38:47.294189 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerName="copy" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294195 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerName="copy" Nov 26 14:38:47 crc kubenswrapper[4695]: E1126 14:38:47.294208 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerName="gather" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294213 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerName="gather" Nov 26 14:38:47 crc kubenswrapper[4695]: E1126 14:38:47.294226 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="extract-content" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294233 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="extract-content" Nov 26 14:38:47 crc kubenswrapper[4695]: E1126 14:38:47.294253 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="extract-utilities" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294259 4695 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="extract-utilities" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294443 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerName="copy" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294457 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="02c1f391-6ed8-46b0-896c-2a18afb7c221" containerName="registry-server" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.294478 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="73e4ac97-5cec-42a6-9468-74f019b091a6" containerName="gather" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.295748 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.301987 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcrkz"] Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.388588 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-utilities\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.388647 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llgzl\" (UniqueName: \"kubernetes.io/projected/8a084ad9-e4e0-40b4-83b9-2819108f643b-kube-api-access-llgzl\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.388756 4695 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-catalog-content\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.491279 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-utilities\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.491618 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llgzl\" (UniqueName: \"kubernetes.io/projected/8a084ad9-e4e0-40b4-83b9-2819108f643b-kube-api-access-llgzl\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.491686 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-catalog-content\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.492075 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-catalog-content\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.492099 4695 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-utilities\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.510939 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llgzl\" (UniqueName: \"kubernetes.io/projected/8a084ad9-e4e0-40b4-83b9-2819108f643b-kube-api-access-llgzl\") pod \"redhat-marketplace-fcrkz\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:47 crc kubenswrapper[4695]: I1126 14:38:47.618659 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:48 crc kubenswrapper[4695]: I1126 14:38:48.063804 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcrkz"] Nov 26 14:38:48 crc kubenswrapper[4695]: I1126 14:38:48.162788 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:38:48 crc kubenswrapper[4695]: E1126 14:38:48.163067 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:38:48 crc kubenswrapper[4695]: I1126 14:38:48.752763 4695 generic.go:334] "Generic (PLEG): container finished" podID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerID="d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed" exitCode=0 Nov 26 14:38:48 crc 
kubenswrapper[4695]: I1126 14:38:48.752824 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcrkz" event={"ID":"8a084ad9-e4e0-40b4-83b9-2819108f643b","Type":"ContainerDied","Data":"d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed"} Nov 26 14:38:48 crc kubenswrapper[4695]: I1126 14:38:48.753048 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcrkz" event={"ID":"8a084ad9-e4e0-40b4-83b9-2819108f643b","Type":"ContainerStarted","Data":"28a471d36c09c3e2d9aee8390d542e5e7527eb5349fa0aa7ec767ac18158b8da"} Nov 26 14:38:50 crc kubenswrapper[4695]: I1126 14:38:50.771667 4695 generic.go:334] "Generic (PLEG): container finished" podID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerID="4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9" exitCode=0 Nov 26 14:38:50 crc kubenswrapper[4695]: I1126 14:38:50.771704 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcrkz" event={"ID":"8a084ad9-e4e0-40b4-83b9-2819108f643b","Type":"ContainerDied","Data":"4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9"} Nov 26 14:38:52 crc kubenswrapper[4695]: I1126 14:38:52.802788 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcrkz" event={"ID":"8a084ad9-e4e0-40b4-83b9-2819108f643b","Type":"ContainerStarted","Data":"5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c"} Nov 26 14:38:52 crc kubenswrapper[4695]: I1126 14:38:52.822504 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fcrkz" podStartSLOduration=3.003127125 podStartE2EDuration="5.822481633s" podCreationTimestamp="2025-11-26 14:38:47 +0000 UTC" firstStartedPulling="2025-11-26 14:38:48.75584155 +0000 UTC m=+4512.391666632" lastFinishedPulling="2025-11-26 14:38:51.575196058 +0000 UTC m=+4515.211021140" 
observedRunningTime="2025-11-26 14:38:52.816495652 +0000 UTC m=+4516.452320734" watchObservedRunningTime="2025-11-26 14:38:52.822481633 +0000 UTC m=+4516.458306705" Nov 26 14:38:57 crc kubenswrapper[4695]: I1126 14:38:57.619423 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:57 crc kubenswrapper[4695]: I1126 14:38:57.619988 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:57 crc kubenswrapper[4695]: I1126 14:38:57.688257 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:58 crc kubenswrapper[4695]: I1126 14:38:58.306589 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:38:58 crc kubenswrapper[4695]: I1126 14:38:58.369257 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcrkz"] Nov 26 14:38:59 crc kubenswrapper[4695]: I1126 14:38:59.866684 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fcrkz" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="registry-server" containerID="cri-o://5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c" gracePeriod=2 Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.429597 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.548909 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-utilities\") pod \"8a084ad9-e4e0-40b4-83b9-2819108f643b\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.549202 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-catalog-content\") pod \"8a084ad9-e4e0-40b4-83b9-2819108f643b\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.549382 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llgzl\" (UniqueName: \"kubernetes.io/projected/8a084ad9-e4e0-40b4-83b9-2819108f643b-kube-api-access-llgzl\") pod \"8a084ad9-e4e0-40b4-83b9-2819108f643b\" (UID: \"8a084ad9-e4e0-40b4-83b9-2819108f643b\") " Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.549747 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-utilities" (OuterVolumeSpecName: "utilities") pod "8a084ad9-e4e0-40b4-83b9-2819108f643b" (UID: "8a084ad9-e4e0-40b4-83b9-2819108f643b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.550536 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.566202 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a084ad9-e4e0-40b4-83b9-2819108f643b-kube-api-access-llgzl" (OuterVolumeSpecName: "kube-api-access-llgzl") pod "8a084ad9-e4e0-40b4-83b9-2819108f643b" (UID: "8a084ad9-e4e0-40b4-83b9-2819108f643b"). InnerVolumeSpecName "kube-api-access-llgzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.567582 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a084ad9-e4e0-40b4-83b9-2819108f643b" (UID: "8a084ad9-e4e0-40b4-83b9-2819108f643b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.652011 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a084ad9-e4e0-40b4-83b9-2819108f643b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.652051 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llgzl\" (UniqueName: \"kubernetes.io/projected/8a084ad9-e4e0-40b4-83b9-2819108f643b-kube-api-access-llgzl\") on node \"crc\" DevicePath \"\"" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.877938 4695 generic.go:334] "Generic (PLEG): container finished" podID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerID="5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c" exitCode=0 Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.877982 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcrkz" event={"ID":"8a084ad9-e4e0-40b4-83b9-2819108f643b","Type":"ContainerDied","Data":"5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c"} Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.878006 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fcrkz" event={"ID":"8a084ad9-e4e0-40b4-83b9-2819108f643b","Type":"ContainerDied","Data":"28a471d36c09c3e2d9aee8390d542e5e7527eb5349fa0aa7ec767ac18158b8da"} Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.878024 4695 scope.go:117] "RemoveContainer" containerID="5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.879144 4695 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fcrkz" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.906787 4695 scope.go:117] "RemoveContainer" containerID="4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.927930 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcrkz"] Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.937797 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fcrkz"] Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.957878 4695 scope.go:117] "RemoveContainer" containerID="d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.981215 4695 scope.go:117] "RemoveContainer" containerID="5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c" Nov 26 14:39:00 crc kubenswrapper[4695]: E1126 14:39:00.981694 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c\": container with ID starting with 5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c not found: ID does not exist" containerID="5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.981725 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c"} err="failed to get container status \"5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c\": rpc error: code = NotFound desc = could not find container \"5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c\": container with ID starting with 5d795ad88db8f10cf9defe3d4ee88cb303c945e7bf4750b992fc9ac4f502570c not found: 
ID does not exist" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.981747 4695 scope.go:117] "RemoveContainer" containerID="4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9" Nov 26 14:39:00 crc kubenswrapper[4695]: E1126 14:39:00.982130 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9\": container with ID starting with 4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9 not found: ID does not exist" containerID="4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.982156 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9"} err="failed to get container status \"4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9\": rpc error: code = NotFound desc = could not find container \"4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9\": container with ID starting with 4908347e8c4ab3bb039741a091e2ca45bce37647f97c55335fa653153eccf5e9 not found: ID does not exist" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.982171 4695 scope.go:117] "RemoveContainer" containerID="d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed" Nov 26 14:39:00 crc kubenswrapper[4695]: E1126 14:39:00.982615 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed\": container with ID starting with d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed not found: ID does not exist" containerID="d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed" Nov 26 14:39:00 crc kubenswrapper[4695]: I1126 14:39:00.982637 4695 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed"} err="failed to get container status \"d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed\": rpc error: code = NotFound desc = could not find container \"d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed\": container with ID starting with d5d54648f3e6537f52e076201beae29f0708c0294b249c21fc50f61b6ce8d9ed not found: ID does not exist" Nov 26 14:39:01 crc kubenswrapper[4695]: I1126 14:39:01.176979 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" path="/var/lib/kubelet/pods/8a084ad9-e4e0-40b4-83b9-2819108f643b/volumes" Nov 26 14:39:02 crc kubenswrapper[4695]: I1126 14:39:02.163500 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:39:02 crc kubenswrapper[4695]: E1126 14:39:02.164079 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:39:17 crc kubenswrapper[4695]: I1126 14:39:17.172419 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:39:17 crc kubenswrapper[4695]: E1126 14:39:17.173183 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:39:32 crc kubenswrapper[4695]: I1126 14:39:32.162374 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:39:32 crc kubenswrapper[4695]: E1126 14:39:32.163978 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:39:46 crc kubenswrapper[4695]: I1126 14:39:46.163618 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:39:46 crc kubenswrapper[4695]: E1126 14:39:46.164693 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:40:01 crc kubenswrapper[4695]: I1126 14:40:01.162397 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:40:01 crc kubenswrapper[4695]: E1126 14:40:01.163894 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:40:12 crc kubenswrapper[4695]: I1126 14:40:12.162214 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:40:12 crc kubenswrapper[4695]: E1126 14:40:12.163050 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:40:24 crc kubenswrapper[4695]: I1126 14:40:24.162484 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:40:24 crc kubenswrapper[4695]: E1126 14:40:24.163428 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:40:38 crc kubenswrapper[4695]: I1126 14:40:38.163645 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:40:38 crc kubenswrapper[4695]: E1126 14:40:38.165321 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:40:53 crc kubenswrapper[4695]: I1126 14:40:53.163543 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:40:53 crc kubenswrapper[4695]: E1126 14:40:53.164461 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:41:04 crc kubenswrapper[4695]: I1126 14:41:04.162448 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:41:04 crc kubenswrapper[4695]: E1126 14:41:04.163458 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:41:16 crc kubenswrapper[4695]: I1126 14:41:16.162795 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:41:16 crc kubenswrapper[4695]: E1126 14:41:16.163530 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:41:30 crc kubenswrapper[4695]: I1126 14:41:30.164876 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:41:30 crc kubenswrapper[4695]: E1126 14:41:30.165954 4695 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mmgd2_openshift-machine-config-operator(73cbd5f2-751e-49c2-b804-e81b9ca46cd4)\"" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" podUID="73cbd5f2-751e-49c2-b804-e81b9ca46cd4" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.288771 4695 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bphlp"] Nov 26 14:41:34 crc kubenswrapper[4695]: E1126 14:41:34.289944 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="extract-content" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.289965 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="extract-content" Nov 26 14:41:34 crc kubenswrapper[4695]: E1126 14:41:34.290001 4695 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="extract-utilities" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.290013 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="extract-utilities" Nov 26 14:41:34 crc kubenswrapper[4695]: E1126 14:41:34.290040 4695 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="registry-server" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.290051 4695 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="registry-server" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.290384 4695 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a084ad9-e4e0-40b4-83b9-2819108f643b" containerName="registry-server" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.295303 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.307243 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bphlp"] Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.407008 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-catalog-content\") pod \"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.407094 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5ksg\" (UniqueName: \"kubernetes.io/projected/ff14d067-5688-4058-b3a0-a3ed93cec657-kube-api-access-j5ksg\") pod \"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.407192 4695 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-utilities\") pod 
\"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.509215 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-catalog-content\") pod \"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.509313 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5ksg\" (UniqueName: \"kubernetes.io/projected/ff14d067-5688-4058-b3a0-a3ed93cec657-kube-api-access-j5ksg\") pod \"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.509411 4695 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-utilities\") pod \"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.509871 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-catalog-content\") pod \"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.510135 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-utilities\") pod \"community-operators-bphlp\" (UID: 
\"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.530699 4695 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5ksg\" (UniqueName: \"kubernetes.io/projected/ff14d067-5688-4058-b3a0-a3ed93cec657-kube-api-access-j5ksg\") pod \"community-operators-bphlp\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:34 crc kubenswrapper[4695]: I1126 14:41:34.620416 4695 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:35 crc kubenswrapper[4695]: I1126 14:41:35.177931 4695 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bphlp"] Nov 26 14:41:35 crc kubenswrapper[4695]: W1126 14:41:35.182039 4695 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff14d067_5688_4058_b3a0_a3ed93cec657.slice/crio-870011db9e2f235f9a117cbab0290d7bbee751455e12a49b3a1e9da200197b44 WatchSource:0}: Error finding container 870011db9e2f235f9a117cbab0290d7bbee751455e12a49b3a1e9da200197b44: Status 404 returned error can't find the container with id 870011db9e2f235f9a117cbab0290d7bbee751455e12a49b3a1e9da200197b44 Nov 26 14:41:35 crc kubenswrapper[4695]: I1126 14:41:35.476746 4695 generic.go:334] "Generic (PLEG): container finished" podID="ff14d067-5688-4058-b3a0-a3ed93cec657" containerID="b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d" exitCode=0 Nov 26 14:41:35 crc kubenswrapper[4695]: I1126 14:41:35.476793 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bphlp" event={"ID":"ff14d067-5688-4058-b3a0-a3ed93cec657","Type":"ContainerDied","Data":"b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d"} Nov 26 14:41:35 
crc kubenswrapper[4695]: I1126 14:41:35.476821 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bphlp" event={"ID":"ff14d067-5688-4058-b3a0-a3ed93cec657","Type":"ContainerStarted","Data":"870011db9e2f235f9a117cbab0290d7bbee751455e12a49b3a1e9da200197b44"} Nov 26 14:41:36 crc kubenswrapper[4695]: I1126 14:41:36.492029 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bphlp" event={"ID":"ff14d067-5688-4058-b3a0-a3ed93cec657","Type":"ContainerStarted","Data":"ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7"} Nov 26 14:41:37 crc kubenswrapper[4695]: I1126 14:41:37.503923 4695 generic.go:334] "Generic (PLEG): container finished" podID="ff14d067-5688-4058-b3a0-a3ed93cec657" containerID="ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7" exitCode=0 Nov 26 14:41:37 crc kubenswrapper[4695]: I1126 14:41:37.504033 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bphlp" event={"ID":"ff14d067-5688-4058-b3a0-a3ed93cec657","Type":"ContainerDied","Data":"ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7"} Nov 26 14:41:38 crc kubenswrapper[4695]: I1126 14:41:38.522774 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bphlp" event={"ID":"ff14d067-5688-4058-b3a0-a3ed93cec657","Type":"ContainerStarted","Data":"7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c"} Nov 26 14:41:38 crc kubenswrapper[4695]: I1126 14:41:38.549807 4695 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bphlp" podStartSLOduration=1.940336853 podStartE2EDuration="4.549785356s" podCreationTimestamp="2025-11-26 14:41:34 +0000 UTC" firstStartedPulling="2025-11-26 14:41:35.478030457 +0000 UTC m=+4679.113855559" lastFinishedPulling="2025-11-26 14:41:38.08747895 +0000 UTC 
m=+4681.723304062" observedRunningTime="2025-11-26 14:41:38.542299136 +0000 UTC m=+4682.178124278" watchObservedRunningTime="2025-11-26 14:41:38.549785356 +0000 UTC m=+4682.185610448" Nov 26 14:41:44 crc kubenswrapper[4695]: I1126 14:41:44.621165 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:44 crc kubenswrapper[4695]: I1126 14:41:44.621988 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:44 crc kubenswrapper[4695]: I1126 14:41:44.683671 4695 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:45 crc kubenswrapper[4695]: I1126 14:41:45.164055 4695 scope.go:117] "RemoveContainer" containerID="724275080ef09f514ce25f16abb36ee56eedd29b26af62a6b47883cbc2259aec" Nov 26 14:41:45 crc kubenswrapper[4695]: I1126 14:41:45.608390 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mmgd2" event={"ID":"73cbd5f2-751e-49c2-b804-e81b9ca46cd4","Type":"ContainerStarted","Data":"8698afcd4c0da7514cf466996e9ca8368e83b00066a1333cb5d177e8ead59acf"} Nov 26 14:41:45 crc kubenswrapper[4695]: I1126 14:41:45.671731 4695 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:45 crc kubenswrapper[4695]: I1126 14:41:45.728978 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bphlp"] Nov 26 14:41:47 crc kubenswrapper[4695]: I1126 14:41:47.642400 4695 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bphlp" podUID="ff14d067-5688-4058-b3a0-a3ed93cec657" containerName="registry-server" containerID="cri-o://7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c" 
gracePeriod=2 Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.096812 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.247873 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-utilities\") pod \"ff14d067-5688-4058-b3a0-a3ed93cec657\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.248238 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5ksg\" (UniqueName: \"kubernetes.io/projected/ff14d067-5688-4058-b3a0-a3ed93cec657-kube-api-access-j5ksg\") pod \"ff14d067-5688-4058-b3a0-a3ed93cec657\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.248457 4695 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-catalog-content\") pod \"ff14d067-5688-4058-b3a0-a3ed93cec657\" (UID: \"ff14d067-5688-4058-b3a0-a3ed93cec657\") " Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.249099 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-utilities" (OuterVolumeSpecName: "utilities") pod "ff14d067-5688-4058-b3a0-a3ed93cec657" (UID: "ff14d067-5688-4058-b3a0-a3ed93cec657"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.254300 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff14d067-5688-4058-b3a0-a3ed93cec657-kube-api-access-j5ksg" (OuterVolumeSpecName: "kube-api-access-j5ksg") pod "ff14d067-5688-4058-b3a0-a3ed93cec657" (UID: "ff14d067-5688-4058-b3a0-a3ed93cec657"). InnerVolumeSpecName "kube-api-access-j5ksg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.306071 4695 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff14d067-5688-4058-b3a0-a3ed93cec657" (UID: "ff14d067-5688-4058-b3a0-a3ed93cec657"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.351289 4695 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.351325 4695 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff14d067-5688-4058-b3a0-a3ed93cec657-utilities\") on node \"crc\" DevicePath \"\"" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.351338 4695 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5ksg\" (UniqueName: \"kubernetes.io/projected/ff14d067-5688-4058-b3a0-a3ed93cec657-kube-api-access-j5ksg\") on node \"crc\" DevicePath \"\"" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.655394 4695 generic.go:334] "Generic (PLEG): container finished" podID="ff14d067-5688-4058-b3a0-a3ed93cec657" 
containerID="7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c" exitCode=0 Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.655435 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bphlp" event={"ID":"ff14d067-5688-4058-b3a0-a3ed93cec657","Type":"ContainerDied","Data":"7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c"} Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.655491 4695 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bphlp" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.655528 4695 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bphlp" event={"ID":"ff14d067-5688-4058-b3a0-a3ed93cec657","Type":"ContainerDied","Data":"870011db9e2f235f9a117cbab0290d7bbee751455e12a49b3a1e9da200197b44"} Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.655568 4695 scope.go:117] "RemoveContainer" containerID="7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.688745 4695 scope.go:117] "RemoveContainer" containerID="ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7" Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.697306 4695 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bphlp"] Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.706359 4695 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bphlp"] Nov 26 14:41:48 crc kubenswrapper[4695]: I1126 14:41:48.732825 4695 scope.go:117] "RemoveContainer" containerID="b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d" Nov 26 14:41:49 crc kubenswrapper[4695]: I1126 14:41:49.180130 4695 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff14d067-5688-4058-b3a0-a3ed93cec657" 
path="/var/lib/kubelet/pods/ff14d067-5688-4058-b3a0-a3ed93cec657/volumes" Nov 26 14:41:49 crc kubenswrapper[4695]: I1126 14:41:49.264034 4695 scope.go:117] "RemoveContainer" containerID="7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c" Nov 26 14:41:49 crc kubenswrapper[4695]: E1126 14:41:49.264743 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c\": container with ID starting with 7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c not found: ID does not exist" containerID="7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c" Nov 26 14:41:49 crc kubenswrapper[4695]: I1126 14:41:49.264826 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c"} err="failed to get container status \"7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c\": rpc error: code = NotFound desc = could not find container \"7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c\": container with ID starting with 7b96866c39cb782f91775a2e3da440c5959549f1f97157f63cac3376c0a6c00c not found: ID does not exist" Nov 26 14:41:49 crc kubenswrapper[4695]: I1126 14:41:49.264887 4695 scope.go:117] "RemoveContainer" containerID="ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7" Nov 26 14:41:49 crc kubenswrapper[4695]: E1126 14:41:49.265939 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7\": container with ID starting with ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7 not found: ID does not exist" containerID="ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7" Nov 26 14:41:49 crc kubenswrapper[4695]: 
I1126 14:41:49.266029 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7"} err="failed to get container status \"ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7\": rpc error: code = NotFound desc = could not find container \"ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7\": container with ID starting with ae358c31e78db58149c41ebd92b1dbbf675a9cdef2cb6c981b5e69f4a73059d7 not found: ID does not exist" Nov 26 14:41:49 crc kubenswrapper[4695]: I1126 14:41:49.266092 4695 scope.go:117] "RemoveContainer" containerID="b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d" Nov 26 14:41:49 crc kubenswrapper[4695]: E1126 14:41:49.267227 4695 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d\": container with ID starting with b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d not found: ID does not exist" containerID="b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d" Nov 26 14:41:49 crc kubenswrapper[4695]: I1126 14:41:49.267265 4695 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d"} err="failed to get container status \"b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d\": rpc error: code = NotFound desc = could not find container \"b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d\": container with ID starting with b525a7013953c64e0fec58b4b0c19c4740587f4b53419525c0cb3e8300ab454d not found: ID does not exist"